Seminars (revision of 2018-09-07T19:16:18Z)<p>Gisely: /* Instructions */</p>
<hr />
<div>== Instructions ==<br />
<br />
DON'T POST YOUR SEMINARS HERE!!!!! USE THE NEW WEBSITE: redwood.berkeley.edu/wp-admin. The seminar schedule and scheduling instructions are under Internal Resources.<br />
<br />
# Check the internal calendar (here) for a free seminar slot. Seminars are usually Wednesdays at noon, but the time is flexible in case there is a day that works better for the speaker. However, it is usually best to avoid booking multiple speakers in the same week - it leads to "seminar burnout" and reduced attendance. But use your own judgement here - if it's a good opportunity and that's the only time that works, then go ahead with it.<br />
# Once you have proposed a date to a speaker, fill in the speaker information under the appropriate date (or change if necessary). Use the status field to indicate whether the date is tentative or confirmed. Please also include your name as ''host'' in case somebody wants to contact you.<br />
# Once the invitation is confirmed with the speaker, change the status field to 'confirmed'. Also notify the webmaster (Bruno) [mailto:baolshausen@berkeley.edu] that we have a confirmed speaker so that he/she can update the public web page. Please include a title and abstract.<br />
# Natalie (HWNI) checks our web page regularly and will send out an announcement a week before the seminar, and will also include it with the weekly neuro announcements. But if you don't get the talk confirmed until the last minute, make sure to email Natalie [mailto:nrterranova@berkeley.edu] as well to give her a heads up, so she knows to send out an announcement in time.<br />
# If the speaker needs accommodations, you should contact Natalie [mailto:nrterranova@berkeley.edu] to reserve a room at the faculty club. Tell her it's for a Redwood speaker so she knows how to bill it.<br />
# During the visit you will need to look after the visitor: schedule visits with other labs, make plans for lunch, dinner, etc., and introduce the speaker at the seminar (don't ask Bruno to do this at the last moment). Save receipts for any meals you paid for.<br />
# After the seminar and before the speaker leaves, make sure to give them Natalie's contact info and have them email her their receipts, explaining it's for reimbursement for a Redwood seminar. Natalie will then process the reimbursement. She can also help you get reimbursed for any expenses you incurred for meals and entertainment.<br />
<br />
== Tentative / Confirmed Speakers ==<br />
<br />
'''January 31 2018'''<br />
* Speaker: Joel Makin<br />
* Time: 12:00<br />
* Affiliation: UCSF<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''February 6, 2018'''<br />
* Speaker: Leenoy Mesulam<br />
* Time: 12:00<br />
* Affiliation: Princeton University<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: The 1000+ neurons challenge: emergent simplicity in (very) large populations<br />
* Abstract: Recent technological progress has dramatically increased our access to the neural activity underlying memory-related tasks. These complex high-dimensional data call for theories that allow us to identify signatures of collective activity in the networks that are crucial for the emergence of cognitive functions. As an example, we study the neural activity in dorsal hippocampus as a mouse runs along a virtual linear track. One of the dominant features of this data is the activity of place cells, which fire when the animal visits particular locations. During the first stage of our work we used a maximum entropy framework to characterize the probability distribution of the joint activity patterns observed across ensembles of up to 100 cells. These models, which are equivalent to Ising models with competing interactions, make surprisingly accurate predictions for the activity of individual neurons given the state of the rest of the network, and this is true both for place cells and for non-place cells. For the second stage of our work we study networks of ~ 1500 neurons. To address this much larger system, we use different coarse graining methods, in the spirit of the renormalization group, to uncover macroscopic features of the network. We see hints of scaling and of behavior that is controlled by a non-trivial fixed point. Perhaps, then, there is emergent simplicity even in these very complex systems of real neurons in the brain.<br />
<br />
<br />
'''!!! NOTE: going forward for spring term 2018, please avoid Wednesdays for scheduling Redwood seminars, as we have the Simons brain and computation seminars that morning, and it makes for a packed day to have both !!!'''<br />
<br />
<br />
'''February 21, 2018'''<br />
* Speaker: Tianshi Wang<br />
* Time: 12:00<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: <br />
* Abstract: <br />
<br />
'''April 2, 2018'''<br />
* Speaker: Pascal Fries<br />
* Time: 12:00<br />
* Affiliation: Berkeley<br />
* Host: Bruno/Dana Ballard<br />
* Status: tentative<br />
* Title: <br />
* Abstract: <br />
<br />
'''September 12, 2018'''<br />
* Speaker: Wujie Zhang<br />
* Time: 12:00<br />
* Affiliation: Yartsev lab, Berkeley<br />
* Host: Guy<br />
* Status: Confirmed<br />
* Title:<br />
* Abstract:<br />
<br />
'''September 17, 2018'''<br />
* Speaker: Juergen Jost<br />
* Time: 12:00<br />
* Affiliation: MPI for Mathematics in the Sciences, Leipzig<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''TBD, sometime in the Fall'''<br />
* Speaker: Evangelos Theodorou<br />
* Time: TBD<br />
* Affiliation: GeorgiaTech<br />
* Host: Mike/Dibyendu Mandal<br />
* Status: planning<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
<br />
'''TBD, 2016'''<br />
* Speaker: Alexander Stubbs<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Michael Levy<br />
* Status: tentative<br />
* Title: Could chromatic aberration allow for an alternative evolutionary pathway towards color vision?<br />
* Abstract: We present a mechanism by which organisms with only a single photoreceptor, which have a monochromatic view of the world, can achieve color discrimination. An off-axis pupil and the principle of chromatic aberration (where different wavelengths come to focus at different distances behind a lens) can combine to provide “color-blind” animals with a way to distinguish colors. As a specific example, we constructed a computer model of the visual system of cephalopods (octopus, squid, and cuttlefish) that have a single unfiltered photoreceptor type. We compute a quantitative image quality budget for this visual system and show how chromatic blurring dominates the visual acuity in these animals in shallow water. This proposed mechanism is consistent with the extensive suite of visual/behavioral and physiological data that has been obtained from cephalopod studies and offers a possible solution to the apparent paradox of vivid chromatic behaviors in color blind animals. Moreover, this proposed mechanism has potential applicability in organisms with limited photoreceptor complements, such as spiders and dolphins.<br />
<br />
== Previous Seminars ==<br />
<br />
=== 2017/18 academic year ===<br />
<br />
'''July 10, 2017'''<br />
* Speaker: David Field<br />
* Time: 6:00pm<br />
* Affiliation: Cornell<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''July 18, 2017'''<br />
* Speaker: Jordi Puigbò<br />
* Time: 12:30<br />
* Affiliation: Synthetic, Perceptive, Emotive and Cognitive Systems (SPECS) lab, Dept. of Information and Telecommunication Technologies, Universitat Pompeu Fabra (Barcelona - Spain)<br />
* Host: Vasha<br />
* Status: Confirmed<br />
* Title: State Dependent Modulation of Perception Based on a Computational Model of Conditioning<br />
* Abstract: The embodied mammalian brain evolved to adapt to an only partially known and knowable world. The adaptive labeling of the world is critically dependent on the neocortex, which in turn is modulated by a range of subcortical systems such as the thalamus, ventral striatum, and the amygdala. A particular case in point is the learning paradigm of classical conditioning, where acquired representations of states of the world such as sounds and visual features are associated with predefined discrete behavioral responses such as eye blinks and freezing. Learning progresses in a very specific order, where the animal first identifies the features of the task that are predictive of a motivational state and then forms the association of the current sensory state with a particular action and shapes this action to the specific contingency. This adaptive feature selection has both attentional and memory components, i.e. a behaviorally relevant state must be detected while its representation must be stabilized to allow its interfacing to output systems. Here we present a computational model of the neocortical systems that underlie this feature detection process and its state-dependent modulation mediated by the amygdala and its downstream target, the nucleus basalis of Meynert. Specifically, we analyze how amygdala-driven cholinergic modulation switches between two perceptual modes, one for exploitation of learned representations and prototypes and another one for the exploration of new representations that provoked this change in the motivational state, presenting a framework for rapid learning of behaviorally relevant perceptual representations. Beyond reward-driven learning that is mostly based on exploitation, this paper presents a complementary mechanism for quick exploratory perception and learning grounded in the understanding of fear and surprise.<br />
<br />
'''Aug. 14, 2017'''<br />
* Speaker: Brent Doiron<br />
* Time: 12:00<br />
* Affiliation:<br />
* Host: Bruno/Hillel<br />
* Status: tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''Aug. 15, 2017'''<br />
* Speaker: Ken Miller<br />
* Time: 12:00<br />
* Affiliation: Columbia<br />
* Host: Bruno/Hillel<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Aug. 16, 2017'''<br />
* Speaker: Joshua Vogelstein<br />
* Time: 12:00<br />
* Affiliation: JHU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept. 6, 2017'''<br />
* Speaker: Gerald Friedland<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Jerry<br />
* Status: confirmed<br />
* Title: A Capacity Scaling Law for Artificial Neural Networks<br />
* Abstract:<br />
<br />
'''Sept. 20, 2017'''<br />
* Speaker: Carl Pabo<br />
* Time: 12:00<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Human Thought and the Human Future<br />
* Abstract:<br />
<br />
'''Oct. 11, 2017'''<br />
* Speaker: Deepak Pathak and Pulkit Agrawal<br />
* Time: 12:30 PM<br />
* Affiliation: UC Berkeley, BAIR<br />
* Host: Mayur Mudigonda<br />
* Status: Confirmed<br />
* Title: Curiosity and Rewards<br />
* Abstract:<br />
<br />
'''October 25th 2017'''<br />
* Speaker: Caleb Kemere<br />
* Time: 12:00<br />
* Affiliation: Rice<br />
* Host: Guy Isely<br />
* Status: Confirmed<br />
* Title: Unsupervised Inference of the Hippocampal Population Code from Offline Activity<br />
* Abstract: TBD-- HMM-based hippocampal replay<br />
<br />
'''Nov. 8, 2017'''<br />
* Speaker: John Harte<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Maximum Entropy and the Inference of Patterns in Nature <br />
* Abstract:<br />
<br />
'''Nov. 16, 2017'''<br />
* Speaker: Jeff Hawkins<br />
* Time: 12:00<br />
* Affiliation: Numenta<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''November 29th 2017'''<br />
* Speaker: Joel Kaardal<br />
* Time: 12:00<br />
* Affiliation: Salk<br />
* Host: Bruno/Frederic Theunissen<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''December 13, 2017'''<br />
* Speaker: Zhaoping Li<br />
* Time: 12:00<br />
* Affiliation: UCL<br />
* Host: Bruno/Frederic Theunissen<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''December 19, 2017'''<br />
* Speaker: Shaowei Lin<br />
* Time: 12:00<br />
* Affiliation:<br />
* Host: Chris Hillar<br />
* Status: confirmed<br />
* Title: Biologically plausible deep learning for recurrent spiking neural networks.<br />
* Abstract: Despite widespread success in deep learning, backpropagation has been criticized for its biological implausibility. To address this issue, Hinton and Bengio have suggested that our brains are performing approximations of backpropagation, and some of their proposed models seem promising. In the same vein, we propose a different model for learning in recurrent neural networks (RNNs), known as McCulloch-Pitts processes. As opposed to traditional models for RNNs (such as LSTMs) which are based on continuous-valued neurons operating in discrete time, our model consists of discrete-valued (spiking) neurons operating in continuous time. Through our model, we are able to derive extremely simple and local learning rules, which directly explain experimental results in Spike-Timing-Dependent Plasticity (STDP).<br />
<br />
'''Jan. 24, 2018'''<br />
* Speaker: Miguel Lázaro-Gredilla<br />
* Time: 12:00<br />
* Affiliation: Vicarious<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
=== 2016/17 academic year ===<br />
<br />
'''Sept. 7, 2016'''<br />
* Speaker: Dan Stowell<br />
* Time: 12:00<br />
* Affiliation: Queen Mary, University of London<br />
* Host: Frederic Theunissen<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept. 8, 2016'''<br />
* Speaker: Barb Finlay<br />
* Time: 12:00<br />
* Affiliation: Cornell Univ<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept. 27, 2016'''<br />
* Speaker: Yoshua Bengio<br />
* Time: 11:00<br />
* Affiliation: Univ Montreal<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Oct. 12, 2016'''<br />
* Speaker: Paul Rhodes<br />
* Time: 4:00<br />
* Affiliation: Specific Technologies<br />
* Host: Dylan/Bruno<br />
* Status: confirmed<br />
* Title: A novel and important problem in spatiotemporal pattern classification<br />
* Abstract: Specific Technologies uses a sensor response that consists of a vector time series, a spatiotemporal fingerprint, to classify bacteria at the strain level during their growth. The identification of resistant strains of bacteria has become one of the world's great problems (here is a link to a $20M prize that the US govt has issued: https://www.nih.gov/news-events/news-releases/federal-prize-competition-seeks-innovative-ideas-combat-antimicrobial-resistance). We are using deep convolutional nets to do this classification, but they are instantaneous, and so do not capture the temporal patterns that are often at the core of what differentiates strains. So using the full temporal character of the sensor response time series is a cutting edge neural ML problem, and important to society too.<br />
<br />
'''Oct. 25, 2016'''<br />
* Speaker: Douglas L. Jones<br />
* Time: 2:00<br />
* Affiliation: ECE Department, University of Illinois at Urbana-Champaign<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Optimal energy-efficient coding in sensory neurons<br />
* Abstract: Evolutionary pressure suggests that the spike-based code in the sensory nervous system should satisfy two opposing constraints: 1) minimize signal distortion in the encoding process (i.e., maintain fidelity) by keeping the average spike rate as high as possible, and 2) minimize the metabolic load on the neuron by keeping the average spike rate as low as possible. We hypothesize that selective pressure has shaped the biophysics of a neuron to satisfy these conflicting demands. An energy-fidelity trade-off can be obtained through a constrained optimization process that achieves the lowest signal distortion for a given constraint on the spike rate. We derive the asymptotically optimal average-energy-constrained neuronal source code and show that it leads to a dynamic threshold that functions as an internal decoder (reconstruction filter) and adapts a spike-firing threshold so that spikes are emitted only when the coding error reaches this threshold. A stochastic extension is obtained by adding internal noise (dithering, or stochastic resonance) to the spiking threshold. We show that the source-coding neuron model i) reproduces experimentally observed spike-times in response to a stimulus, and ii) reproduces the serial correlations in the observed sequence of inter-spike intervals, using data from a peripheral sensory neuron and a central (cortical) somatosensory neuron. Finally, we show that the spike-timing code, although a temporal code, is in the limit of high firing rates an instantaneous rate code and accurately predicts the peri-stimulus time histogram (PSTH). We conclude by suggesting possible biophysical (ionic) mechanisms for this coding scheme.<br />
<br />
'''October 26, 2016'''<br />
* Speaker: Eric Jonas<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Charles Frye<br />
* Status: confirmed<br />
* Title: Could a neuroscientist understand a microprocessor?<br />
* Abstract: There is a popular belief in neuroscience that we are primarily data limited, that producing large, multimodal, and complex datasets will, enabled by data analysis algorithms, lead to fundamental insights into the way the brain processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. Here we take a simulated classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the processor. This suggests that current computational approaches in neuroscience may fall short of producing meaningful models of the brain. We discuss several obvious shortcomings with this model, and ways that they might be addressed, both experimentally and computationally. <br />
* Bio: Eric Jonas is currently a postdoc in computer science at UC Berkeley working with Ben Recht on machine learning for scientific data acquisition. He earned his PhD in Computational Neuroscience, M. Eng in Electrical Engineering, BS in Electrical Engineering and Computer Science, and BS in Neurobiology, all from MIT. Prior to his return to academia, he was founder and CEO of Prior Knowledge, a predictive database company which was acquired in 2012 by Salesforce.com, where he was Chief Predictive Scientist until 2014. In 2015 he was named one of the top rising stars in bioengineering by the Defense Advanced Research Projects Agency (DARPA).<br />
<br />
'''Nov. 9, 2016'''<br />
* Speaker: Pulkit Agrawal<br />
* Time: 12:00<br />
* Affiliation: EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''Nov. 16, 2016'''<br />
* Speaker: Sebastian Musslick<br />
* Time: 12:00<br />
* Affiliation: Princeton Neuroscience Institute (Princeton University)<br />
* Host: Brian Cheung<br />
* Status: confirmed<br />
* Title: Parallel Processing Capability Versus Efficiency of Representation in Neural Network Architectures<br />
* Abstract: One of the most salient and well-recognized features of human goal-directed behavior is our limited ability to conduct multiple demanding tasks at once. Why is this? Some have suggested it reflects metabolic limitations, or structural ones. However, both explanations are unlikely. The brain routinely demonstrates the ability to carry out a multitude of processes in an enduring and parallel manner (walking, breathing, listening). Why, in contrast, is its capacity for allocating attention to control-demanding tasks - such a critical and powerful function - so limited? In the first part of my talk I will describe a computational framework that explains limitations of parallel processing in neural network architectures as the result of cross-talk between shared task representations. Using graph-theoretic analyses we show that the parallel processing (multitasking) capability of two-layer networks drops precipitously as a function of task pathway overlap, and scales highly sublinearly with network size. I will describe how this analysis can be applied to task representations encoded in neural networks or neuroimaging data, and show how it can be used to predict both concurrent and sequential multitasking performance in trained neural networks based on single task representations. Our results suggest that maximal parallel processing performance is achieved by segregating task pathways, by separating the representations on which they rely. However, there is a countervailing pressure for pathways to intersect: the re-use of representations to facilitate learning of new tasks. In the second part of my talk I will demonstrate a tradeoff between learning efficiency and parallel processing capability in neural networks. It can be shown that weight priors on learned task similarity improve learning speed and generalization but lead to strong constraints on parallel processing capability. 
These findings will be contrasted with an ongoing behavioral study assessing learning and multitasking performance of human subjects across tasks with varying degrees of feature-overlap.<br />
<br />
'''Nov 30, 2016'''<br />
* Speaker: Marcus Rohrbach<br />
* Time: 12:00<br />
* Affiliation: EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''March 1st, 2017'''<br />
* Speaker: Sahar Akram<br />
* Time: 12:00<br />
* Affiliation: Starkey Hearing Research Center <br />
* Host: Shariq<br />
* Status: Confirmed<br />
* Title: Real-Time & Adaptive Auditory Neural Processing<br />
* Abstract: Decoding the dynamics of brain activity underlying conscious behavior is one of the key questions in systems neuroscience. Sensory neurons, such as those in the auditory system, can undergo rapid and task-dependent changes in their response characteristics during attentive behavior, and thereby result in functional changes in the system over time. In order to quantify human conscious experience, neuroimaging techniques such as electroencephalography (EEG) and magnetoencephalography (MEG) are widely used to record the neural activity from the brain with millisecond temporal resolution. Therefore, a dynamic decoding framework on par with the sampling resolution of EEG/MEG is crucial in order to better understand the neural correlates underlying sophisticated cognitive functions such as attention. I will talk about two recent attempts at real-time decoding of brain neural activity during a competing auditory attention task, using Bayesian hierarchical modeling and adaptive signal processing.<br />
<br />
'''Mar 2, 2017'''<br />
* Speaker: József Fiser<br />
* Time: 12:00<br />
* Affiliation:<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Mar 22, 2017'''<br />
* Speaker: Michael Frank<br />
* Time: 12:00<br />
* Affiliation: Magicore Systems<br />
* Host: Dylan<br />
* Status: Confirmed<br />
* Title: The Future of the Multi-core Platform Task-Superscalar Extensions to Von-Neumann Architecture and Optimization for Neural Networks<br />
* Abstract: Technology scaling had been carrying computer science through the second half of the 20th century until single CPU performance started leveling off, after which multi- and many-core processors, including GPUs, emerged as the substrate for high performance computing. Mobile market implementations followed this trend, and today you might be carrying a phone with more than 16 different processors. For power efficiency reasons, many of the cores are specialized to perform limited functions (such as modem or connectivity control, graphics rendering, or future neural-network acceleration), with most mainstream phones containing four or more general purpose processors. As Steve Jobs insightfully commented almost a decade ago, “The way the processor industry is going is to add more and more cores, but nobody knows how to program those things.” Jobs was correct: programming these multiprocessor systems has become a challenge, and several programming models have been proposed in academia to address this issue. Power and thermals are also an ever-present thorn to mass market applications. Through the years, CPUs based on the von Neumann architecture have fended off attacks from many directions; today complex super-scalar implementations execute multiple instructions each clock cycle, parallel and out-of-order, keeping up the illusion of sequential processing. Recent research demonstrates, though, that augmenting the paradigm of the von Neumann architecture with a few established concepts from data-flow and task-parallel programming will create both a credible and intuitive parallel architecture enabling notable compute efficiency improvement while retaining compatibility with the current mainstream. 
This talk will thus review the current state of the processor industry; after highlighting why we are running out of steam in ILP, I will outline the task-superscalar programming model as the “ring to rule them all” and provide insights as to how this architecture can take advantage of special HW acceleration for data-flow management and provide support for efficient neuromorphic computing.<br />
<br />
'''April 12, 2017'''<br />
* Speaker: Aapo Hyvarinen<br />
* Time: 12:00<br />
* Affiliation: Gatsby/UCL<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''May 24, 2017'''<br />
* Speaker: Pierre Sermanet<br />
* Time: 12:00<br />
* Affiliation: Google Brain<br />
* Host: Brian<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''May 30, 2017'''<br />
* Speaker: Heiko Schütt<br />
* Time: 12:00<br />
* Affiliation: Univ Tübingen<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''June 7, 2017'''<br />
* Speaker: Saurabh Gupta<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Spencer<br />
* Status: confirmed<br />
* Title: Cognitive Mapping and Planning for Visual Navigation<br />
* Abstract: We introduce a novel neural architecture for navigation in novel environments that learns a cognitive map from first person viewpoints and plans a sequence of actions towards goals in the environment. The Cognitive Mapper and Planner (CMP) is based on two key ideas: a) a unified joint architecture for mapping and planning, such that the mapping is driven by the needs of the planner, and b) a spatial memory with the ability to plan given an incomplete set of observations about the world. CMP constructs a top-down belief map of the world and applies a differentiable neural net planner to produce the next action at each time step. The accumulated belief of the world enables the agent to track visited regions of the environment. Our experiments demonstrate that CMP outperforms both reactive strategies and standard memory-based architectures and performs well even in novel environments. Furthermore, we show that CMP can also achieve semantically specified goals, such as “go to a chair”. This is joint work with James Davidson, Sergey Levine, Rahul Sukthankar and Jitendra Malik.<br />
<br />
'''June 14, 2017'''<br />
* Speaker: Madhow<br />
* Time: 12:00<br />
* Affiliation: UCSB<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''June 19, 2017'''<br />
* Speaker: Tali Tishby<br />
* Time: 12:00<br />
* Affiliation: Hebrew Univ.<br />
* Host: Bruno/Daniel Reichman<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''June 21, 2017'''<br />
* Speaker: Jasmine Collins<br />
* Time: 12:00<br />
* Affiliation: Google<br />
* Host: Brian<br />
* Status: confirmed<br />
* Title: Capacity and Trainability in Recurrent Neural Networks <br />
* Abstract: Two potential bottlenecks on the expressiveness of recurrent neural networks (RNNs) are their ability to store information about the task in their parameters, and to store information about the input history in their units. We show experimentally that all common RNN architectures achieve nearly the same per-task and per-unit capacity bounds with careful training, for a variety of tasks and stacking depths. They can store an amount of task information which is linear in the number of parameters, and is approximately 5 bits per parameter. They can additionally store approximately one real number from their input history per hidden unit. We further find that for several tasks it is the per-task parameter capacity bound that determines performance. These results suggest that many previous results comparing RNN architectures are driven primarily by differences in training effectiveness, rather than differences in capacity. Supporting this observation, we compare training difficulty for several architectures, and show that vanilla RNNs are far more difficult to train, yet have slightly higher capacity. Finally, we propose two novel RNN architectures, one of which is easier to train than the LSTM or GRU for deeply stacked architectures.<br />
<br />
=== 2015/16 academic year ===<br />
<br />
'''July 21, 2015'''<br />
* Speaker: Felix Effenberger<br />
* Affiliation: <br />
* Host: Chris H.<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''July 22, 2015'''<br />
* Speaker: Lav Varshney<br />
* Affiliation: Urbana-Champaign<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''July 23, 2015'''<br />
* Speaker: Xuemin Wei<br />
* Affiliation: Univ Penn<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''July 29, 2015'''<br />
* Speaker: Gonzalo Otazu<br />
* Affiliation: Cold Spring Harbor Laboratory, Long Island, NY<br />
* Host: Mike D<br />
* Status: Confirmed<br />
* Title: The Role of Cortical Feedback in Olfactory Processing<br />
* Abstract: The olfactory bulb receives rich glutamatergic projections from the piriform cortex. However, the dynamics and importance of these feedback signals remain unknown. In the first part of this talk, I will present data from multiphoton calcium imaging of cortical feedback in the olfactory bulb of awake mice. Responses of feedback boutons were sparse, odor specific, and often outlasted stimuli by several seconds. Odor presentation either enhanced or suppressed the activity of boutons. However, any given bouton responded with stereotypic polarity across multiple odors, preferring either enhancement or suppression. Inactivation of piriform cortex increased odor responsiveness and pairwise similarity of mitral cells but had little impact on tufted cells. We propose that cortical feedback differentially impacts these two output channels of the bulb by specifically decorrelating mitral cell responses to enable odor separation. In the second part of the talk I will introduce a computational model of odor identification in natural scenes that uses cortical feedback and how the model predictions match our experimental data.<br />
<br />
'''Aug 19, 2015'''<br />
* Speaker: Wujie Zhang<br />
* Affiliation: Columbia<br />
* Host: Bruno/Michael Yartsev<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept 2, 2015'''<br />
* Speaker: Jeremy Maitin-Shepard<br />
* Affiliation: Computer Science, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Combinatorial Energy Learning for Image Segmentation<br />
* Abstract: Recent advances in volume electron microscopy make it possible to image neuronal tissue volumes containing hundreds of thousands of neurons at sufficient resolution to discern even the finest neuronal processes. Accurate 3-D segmentation of these processes, densely packed in petavoxel-scale volumes, is the key bottleneck in reconstructing large-scale neural circuits.<br />
<br />
'''Sept 8, 2015'''<br />
* Speaker: Jennifer Hasler<br />
* Affiliation: Georgia Tech<br />
* Host: Bruno/Mika<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''October 29, 2015'''<br />
* Speaker: Garrett Kenyon<br />
* Affiliation: Los Alamos National Laboratory<br />
* Host: Dylan<br />
* Status: confirmed<br />
* Title: A Deconvolutional Competitive Algorithm (DCA)<br />
* Abstract: The Locally Competitive Algorithm (LCA) is a neurally-plausible sparse solver based on lateral inhibition between leaky integrator neurons. LCA accounts for many linear and nonlinear response properties of V1 simple cells, including end-stopping and contrast-invariant orientation tuning. Here, we describe a convolutional implementation of LCA in which a column of feature vectors is replicated with a stride that is much smaller than the diameter of the corresponding kernels, allowing the construction of dictionaries that are many times more overcomplete than without replication. Using a local Hebbian rule that minimizes sparse reconstruction error, we are able to learn representations from unlabeled imagery, including monocular and stereo video streams, that in some cases support near state-of-the-art performance on object detection, action classification and depth estimation tasks, with a simple linear classifier. We further describe a scalable approach to building a hierarchy of convolutional LCA layers, which we call a Deconvolutional Competitive Algorithm (DCA). All layers in a DCA are trained simultaneously and all layers contribute to a single image reconstruction, with each layer deconvolving its representation through all lower layers back to the image plane. We show that a 3-layer DCA trained on short video clips obtained from hand-held cameras exhibits a clear segregation of image content, with features in the top layer reconstructing large-scale structures while features in the middle and bottom layers reconstruct progressively finer details. Lastly, we describe PetaVision, an open source, cloud-friendly, high-performance neural simulation toolbox that was used to perform the numerical studies presented here.<br />
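For readers unfamiliar with LCA, the core dynamics described above (leaky-integrator neurons driven by their match to the input and competing through lateral inhibition) fit in a few lines. This is a minimal single-patch sketch, not the convolutional PetaVision implementation the talk describes; the dictionary, threshold, and step size below are arbitrary toy choices:

```python
import numpy as np

def lca(Phi, x, lam=0.1, tau=0.1, n_iter=500):
    """Sparse-code x in dictionary Phi via the Locally Competitive Algorithm.

    Phi: (d, k) dictionary with unit-norm columns (toy setup).
    Each unit is a leaky integrator driven by its feedforward match to the
    input and inhibited by active neighbors through G = Phi^T Phi - I.
    """
    k = Phi.shape[1]
    b = Phi.T @ x                          # feedforward drive
    G = Phi.T @ Phi - np.eye(k)            # lateral inhibition weights
    u = np.zeros(k)                        # membrane states
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    for _ in range(n_iter):
        u += tau * (b - u - G @ soft(u))   # leaky-integrator dynamics
    return soft(u)                         # thresholded (sparse) activations
```

At convergence the thresholded activations minimize the usual sparse-coding (lasso) objective, which is what makes the circuit a "sparse solver".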
<br />
'''Nov 18, 2015'''<br />
* Speaker: Hillel Adesnik<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Nov 17, 2015'''<br />
* Speaker: Manuel Lopez<br />
* Affiliation: <br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Dec 2, 2015'''<br />
* Speaker: Steven Brumby<br />
* Affiliation: [http://www.descarteslabs.com/ Descartes Labs]<br />
* Host: Dylan<br />
* Status: confirmed<br />
* Title: Seeing the Earth in the Cloud<br />
* Abstract: The proliferation of transistors has increased the performance of computing systems by over a factor of a million in the past 30 years, and is also dramatically increasing the amount of data in existence, driving improvements in sensor, communication and storage technology. Multi-decadal Earth and planetary remote sensing global datasets at the petabyte scale (8×10^15 bits) are now available in commercial clouds, and new satellite constellations are planning to generate petabytes of images per year, providing daily global coverage at a few meters per pixel. Cloud storage with adjacent high-bandwidth compute, combined with recent advances in neuroscience-inspired machine learning for computer vision, is enabling understanding of the world at a scale and at a level of granularity never before feasible. We report here on a computation processing over a petabyte of compressed raw data from 2.8 quadrillion pixels (2.8 petapixels) acquired by the US Landsat and MODIS programs over the past 40 years. Using commodity cloud computing resources, we convert the imagery to a calibrated, georeferenced, multiresolution tiled format suited for machine-learning analysis. We believe ours is the first application to process, in less than a day, on generally available resources, over a petabyte of scientific image data. We report on work using this reprocessed dataset for experiments demonstrating country-scale food production monitoring, an indicator for famine early warning. <br />
<br />
'''Dec 14, 2015'''<br />
* Speaker: Bill Softky <br />
* Affiliation:<br />
* Host: Bruno<br />
* Status: confirmed <br />
* Title: Screen addiction - informal Redwood group seminar<br />
<br />
'''Dec 16, 2015'''<br />
* Speaker: Mike Landy<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Feb 3, 2016'''<br />
* Speaker: Ping-Chen Huang<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Feb 17, 2016'''<br />
* Speaker: Andrew Saxe<br />
* Affiliation: Harvard<br />
* Host: Jesse<br />
* Status: confirmed<br />
* Title: Hallmarks of Deep Learning in the Brain<br />
<br />
'''Feb 24, 2016'''<br />
* Speaker: Miguel Carreira-Perpinan<br />
* Affiliation: UC Merced<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
<br />
'''Mar 1, 2016'''<br />
* Speaker: Leon Gatys<br />
* Affiliation: Univ Tubingen<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Mar 7-9, 2016'''<br />
* NICE workshop<br />
<br />
'''Mar 9, 2016'''<br />
* Tatiana Engel - HWNI job talk at 12:00<br />
<br />
'''Mar 16, 2016'''<br />
* Talia Lerner - HWNI job talk at 12:00<br />
<br />
'''Mar 23, 2016'''<br />
* Speaker: Kwabena Boahen<br />
* Affiliation: Stanford<br />
* Host: Max Kanwal/Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''April 11, 2016'''<br />
* Speaker: Hao Su<br />
* Time: at 12:00<br />
* Affiliation: Geometric Computing Lab and Artificial Intelligence Lab, Stanford University<br />
* Host: Yubei<br />
* Status: confirmed<br />
* Title: [Tentative] Joint Analysis for 2D Images and 3D shapes<br />
* Abstract: Coming<br />
<br />
'''May 04, 2016'''<br />
* Speaker: Zhengya Zhang<br />
* Time: 12:00<br />
* Affiliation: Electrical Engineering and Computer Science, University of Michigan<br />
* Host: Dylan, Bruno<br />
* Status: Confirmed<br />
* Title: Sparse Coding ASIC Chips for Feature Extraction and Classification<br />
* Abstract: Hardware-based computer vision accelerators will be an essential part of future mobile and autonomous devices to meet their low-power and real-time processing requirements. To realize high energy efficiency and high throughput, the accelerator architecture can be massively parallelized and tailored to the underlying algorithms, which is an advantage over software-based solutions and general-purpose hardware. In this talk, I will present three application-specific integrated circuit (ASIC) chips that implement the sparse and independent local network (SAILnet) algorithm and the locally competitive algorithm (LCA) for feature extraction and classification. Two of the chips were designed using an array of leaky integrate-and-fire neurons. Sparse activations of the neurons make possible an efficient grid-ring architecture to deliver an image processing throughput of 1 G pixel/s using only 200 mW. The third chip was designed using a convolution approach. Sparsity is again an important factor that enabled the use of sparse convolvers to achieve an effective performance of 900 G operations/s using less than 150 mW.<br />
<br />
'''May 18, 2016'''<br />
* Speaker: Melanie Mitchell<br />
* Affiliation: Portland State University and Santa Fe Institute<br />
* Host: Dylan<br />
* Time: 12:00<br />
* Status: confirmed<br />
* Title: Using Analogy to Recognize Visual Situations<br />
* Abstract: Enabling computers to recognize abstract visual situations remains a hard open problem in artificial intelligence. No machine vision system comes close to matching human ability at identifying the contents of images or visual scenes, or at recognizing abstract similarity between different scenes, even though such abilities pervade human cognition. In this talk I will describe my research on getting computers to flexibly recognize visual situations by integrating low-level vision algorithms with an agent-based model of higher-level concepts and analogy-making.<br />
* Bio: Melanie Mitchell is Professor of Computer Science at Portland State University, and External Professor and Member of the Science Board at the Santa Fe Institute. She received a Ph.D. in Computer Science from the University of Michigan. Her dissertation, in collaboration with her advisor Douglas Hofstadter, was the development of Copycat, a computer program that makes analogies. She is the author or editor of five books and over 70 scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her most recent book, Complexity: A Guided Tour (Oxford, 2009), won the 2010 Phi Beta Kappa Science Book Award. It was also named by Amazon.com as one of the ten best science books of 2009, and was longlisted for the Royal Society's 2010 book prize. Melanie directs the Santa Fe Institute's Complexity Explorer project, which offers online courses and other educational resources related to the field of complex systems.<br />
<br />
'''June 8, 2016'''<br />
* Speaker: Kris Bouchard<br />
* Time: 12:00<br />
* Affiliation: LBNL<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: The union of intersections method<br />
* Abstract:<br />
<br />
'''June 15, 2016'''<br />
* Speaker: James Blackmon<br />
* Time: 12:00<br />
* Affiliation: San Francisco State University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
=== 2014/15 academic year ===<br />
<br />
'''2 July 2014'''<br />
* Speaker: Kelly Clancy<br />
* Affiliation: Feldman lab<br />
* Host: Guy<br />
* Status: confirmed<br />
* Title: Volitional control of neural assemblies in L2/3 of motor and somatosensory cortices<br />
* Abstract: I'll be talking about a joint effort between the Feldman, Carmena and Costa labs to study abstract task learning by small neuronal assemblies in intact networks. Brain-machine interfaces are a unique tool for studying learning, thanks to the direct mapping between neural activity and reward. We trained mice to operantly control an auditory cursor using spike-related calcium signals recorded with two-photon imaging in motor and somatosensory cortex, allowing us to assess the effects of learning with great spatial detail. Mice rapidly learned to modulate activity in layer 2/3 neurons, evident both across and within sessions. Interestingly, even neurons that exhibited very low or no spontaneous spiking--so-called 'silent' cells that are invisible to electrode-based techniques--could be behaviorally up-modulated for task performance. Learning was accompanied by modifications of firing correlations in spatially localized networks at fine scales.<br />
<br />
'''23 July 2014'''<br />
* Speaker: Gautam Agarwal<br />
* Affiliation: UC Berkeley/Champalimaud<br />
* Host: Friedrich Sommer<br />
* Status: confirmed<br />
* Title: Unsolved Mysteries of Hippocampal Dynamics<br />
* Abstract: Two radically different forms of electrical activity can be observed in the rat hippocampus: spikes and local field potentials (LFPs). Hippocampal pyramidal neurons are mostly silent, yet spike vigorously as the subject encounters particular locations in its environment. In contrast, LFPs appear to lack place-selectivity, persisting regardless of the rat's location. Recently, we found that in fact one can recover from LFPs the spatial information present in the underlying neuronal population, showing how these two signals are two sides of the same coin. Nonetheless, there are many aspects of the LFP that remain mysterious. I will review several observations and explanatory gaps which await further study. These include: the relationship of LFP patterns to anatomy; the elusive structure of gamma waves; complex forms of cross-frequency coupling; variations in LFP patterns seen when the rat explores its world more freely; reconciling the memory and navigation roles of the hippocampus.<br />
<br />
'''6 Aug 2014'''<br />
* Speaker: Georg Martius<br />
* Affiliation: Max Planck Institute, Leipzig<br />
* Host: Fritz Sommer<br />
* Status: confirmed<br />
* Title: Information driven self-organization of robotic behavior<br />
* Abstract: Autonomy is a puzzling phenomenon in nature and a major challenge in the world of artifacts. A key feature of autonomy in both natural and artificial systems is seen in the ability for independent exploration. In animals and humans, the ability to modify one's own pattern of activity is not only an indispensable trait for adaptation and survival in new situations; it also provides a learning system with novel information for improving its cognitive capabilities, and it is essential for development. Efficient exploration in high-dimensional spaces is a major challenge in building learning systems. We propose to implement exploration as a deterministic law derived from maximizing an information quantity. More specifically, we use the predictive information of the sensor process (of a robot) to obtain an update rule (exploration dynamics) for the controller parameters. To be adequate for robotics applications, the non-stationary nature of the underlying time series has to be taken into account, which we do by proposing the time-local predictive information (TiPI). Importantly, the exploration dynamics is derived analytically, and by this we link information theory and dynamical systems. Without a random component, the change in the parameters is deterministically given as a function of the states in a certain time window. For an embodied system this means in particular that constraints, responses, and current knowledge of the dynamical interaction with the environment can directly be used to advance further exploration. Randomness is replaced with spontaneity, which we demonstrate to restrict the search space automatically to the physically relevant dimensions. Its effectiveness will be presented with various experiments on high-dimensional robotic systems, and we argue that this is a promising way to avoid the curse of dimensionality. This talk describes joint work with Ralf Der and Nihat Ay.<br />
<br />
'''15 Aug 2014'''<br />
* Speaker: Juergen Schmidhuber<br />
* Affiliation: IDSIA, Switzerland<br />
* Host: James/Shariq<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''2 Sept 2014'''<br />
* Speaker: Oriol Vinyals <br />
* Affiliation: Google<br />
* Host: Guy<br />
* Status: confirmed<br />
* Title: Machine Translation with Long-Short Term Memory Models<br />
* Abstract: Supervised large deep neural networks achieved good results on speech recognition and computer vision. Although very successful, deep neural networks can only be applied to problems whose inputs and outputs can be conveniently encoded with vectors of fixed dimensionality - but cannot easily be applied to problems whose inputs and outputs are sequences. In this work, we show how to use a large deep Long Short-Term Memory (LSTM) model to solve domain-agnostic supervised sequence to sequence problems with minimal manual engineering. Our model uses one LSTM to map the input sequence to a vector of a fixed dimensionality and another LSTM to map the vector to the output sequence. We applied our model to a machine translation task and achieved encouraging results. On the WMT'14 translation task from English to French, a model combination of 6 large LSTMs achieves a BLEU score of 32.3 (where a larger score is better). For comparison, a strong standard statistical MT baseline achieves a BLEU score of 33.3. When we use our LSTM to rescore the n-best lists produced by the SMT baseline, we achieve a BLEU score of 36.3, which is a new state of the art. This is joint work with Ilya Sutskever and Quoc Le.<br />
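The encoder-decoder shape described in the abstract — one LSTM folding the input sequence into a fixed-dimensional vector, a second LSTM unrolling an output sequence from it — can be sketched with untrained random weights. This is purely an illustration of the architecture's data flow (toy sizes, no training), not the paper's trained translation model:

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W):
    """One LSTM step; W maps the concatenated [x; h] vector to the four
    gate pre-activations (input, forget, output, candidate)."""
    z = W @ np.concatenate([x, h])
    n = h.size
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o, g = z[:n], z[n:2*n], z[2*n:3*n], z[3*n:]
    c = sig(f) * c + sig(i) * np.tanh(g)   # gated cell update
    h = sig(o) * np.tanh(c)                # gated hidden output
    return h, c

def seq2seq(xs, n_out, n_hidden=16):
    """Encode the list xs into one fixed vector, then decode n_out steps."""
    d = xs[0].size
    Wenc = rng.normal(0, 0.1, (4 * n_hidden, d + n_hidden))
    Wdec = rng.normal(0, 0.1, (4 * n_hidden, 2 * n_hidden))
    h, c = np.zeros(n_hidden), np.zeros(n_hidden)
    for x in xs:                           # encoder: fold the input sequence
        h, c = lstm_step(x, h, c, Wenc)
    thought = h                            # fixed-dimensional summary vector
    h, c, ys = np.zeros(n_hidden), np.zeros(n_hidden), []
    for _ in range(n_out):                 # decoder: unroll from the summary
        h, c = lstm_step(thought, h, c, Wdec)
        ys.append(h)
    return thought, ys
```

The key design point the abstract emphasizes is that `thought` has fixed dimensionality regardless of input sequence length, which is what lets the same model handle variable-length sequence-to-sequence problems.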
<br />
'''19 Sept 2014'''<br />
* Speaker: Gary Marcus<br />
* Affiliation: NYU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''24 Sept 2014'''<br />
* Speaker: Alyosha Efros<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''30 Sep 2014'''<br />
* Speaker: Alejandro Bujan<br />
* Affiliation:<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Propagation and variability of evoked responses: the role of correlated inputs and oscillations<br />
* Abstract: <br />
<br />
'''8 Oct 2014'''<br />
* Speaker: Siyu Zhang<br />
* Affiliation: UC Berkeley<br />
* Host: Karl<br />
* Status: confirmed<br />
* Title: Long-range and local circuits for top-down modulation of visual cortical processing<br />
* Abstract:<br />
<br />
'''15 Oct 2014'''<br />
* Speaker: Tamara Broderick<br />
* Affiliation: UC Berkeley<br />
* Host: Yvonne/James<br />
* Status: confirmed<br />
* Title: Feature allocations, probability functions, and paintboxes<br />
* Abstract: Clustering involves placing entities into mutually exclusive categories. We wish to relax the requirement of mutual exclusivity, allowing objects to belong simultaneously to multiple classes, a formulation that we refer to as "feature allocation." The first step is a theoretical one. In the case of clustering the class of probability distributions over exchangeable partitions of a dataset has been characterized (via exchangeable partition probability functions and the Kingman paintbox). These characterizations support an elegant nonparametric Bayesian framework for clustering in which the number of clusters is not assumed to be known a priori. We establish an analogous characterization for feature allocation; we define notions of "exchangeable feature probability functions" and "feature paintboxes" that lead to a Bayesian framework that does not require the number of features to be fixed a priori. The second step is a computational one. Rather than appealing to Markov chain Monte Carlo for Bayesian inference, we develop a method to transform Bayesian methods for feature allocation (and other latent structure problems) into optimization problems with objective functions analogous to K-means in the clustering setting. These yield approximations to Bayesian inference that are scalable to large inference problems.<br />
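The "objective functions analogous to K-means" mentioned in the abstract are easiest to see in the clustering case, where the small-variance limit of a Dirichlet-process mixture yields the DP-means algorithm; the feature-allocation analogue (BP-means) follows the same pattern. A minimal DP-means sketch, with `lam` as the new-cluster penalty (a toy illustration, not the talk's derivation):

```python
import numpy as np

def dp_means(X, lam, n_iter=10):
    """DP-means clustering: like K-means, but a point opens a new cluster
    whenever its squared distance to every existing center exceeds lam,
    so the number of clusters need not be fixed a priori."""
    centers = [X[0].copy()]
    z = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        while True:                        # assign, opening clusters as needed
            C = np.array(centers)
            d2 = ((X[:, None, :] - C[None]) ** 2).sum(-1)
            z = d2.argmin(1)
            far = np.flatnonzero(d2[np.arange(len(X)), z] > lam)
            if far.size == 0:
                break
            centers.append(X[far[0]].copy())
        centers = [X[z == k].mean(0) if np.any(z == k) else centers[k]
                   for k in range(len(centers))]   # K-means-style update
    return np.array(centers), z
```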
<br />
'''29 Oct 2014'''<br />
* Speaker: Ken Nakayama<br />
* Affiliation: Harvard<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Topics in higher level visuo-motor control<br />
* Abstract: TBA<br />
<br />
'''5 Nov 2014''' - '''BVLC retreat'''<br />
<br />
'''20 Nov 2014'''<br />
* Speaker: Haruo Hosoya<br />
* Affiliation: ATR Institute, Japan<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''9 Dec 2014'''<br />
* Speaker: Dirk DeRidder<br />
* Affiliation: Dunedin School of Medicine, University of Otago, New Zealand<br />
* Host: Bruno/Walter Freeman<br />
* Status: confirmed<br />
* Title: The Bayesian brain, phantom percepts and brain implants<br />
* Abstract: TBA<br />
<br />
'''January 14, 2015'''<br />
* Speaker: Kevin O'Regan<br />
* Affiliation: CNRS - Université Paris Descartes<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''January 21, 2015'''<br />
* Speaker: Adrienne Fairhall<br />
* Affiliation: University of Washington<br />
* Host: Mike Schachter<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''January 26, 2015'''<br />
* Speaker: Abraham Peled<br />
* Affiliation: Mental Health Center, 'Technion' Israel Institute of Technology<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Clinical Brain Profiling: A Neuro-Computational psychiatry<br />
* Abstract: TBA<br />
<br />
'''January 28, 2015'''<br />
* Speaker: Rich Ivry<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Embodied Decision Making: System interactions in sensorimotor adaptation and reinforcement learning<br />
* Abstract:<br />
<br />
'''February 11, 2015'''<br />
* Speaker: Mark Lescroart<br />
* Affiliation: UC Berkeley<br />
* Host: Karl<br />
* Status: tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''February 25, 2015'''<br />
* Speaker: Steve Chase<br />
* Affiliation: CMU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Joint Redwood/CNEP seminar<br />
* Abstract:<br />
<br />
'''March 3, 2015'''<br />
* Speaker: Andreas Herz<br />
* Affiliation: Bernstein Center, Munich<br />
* Host: Bruno/Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''March 3, 2015 - 4:00'''<br />
* Speaker: James Cooke<br />
* Affiliation: Oxford<br />
* Host: Mike Deweese<br />
* Status: confirmed<br />
* Title: Neural Circuitry Underlying Contrast Gain Control in Primary Auditory Cortex<br />
* Abstract:<br />
<br />
'''March 4, 2015'''<br />
* Speaker: Bill Sprague<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: V1 disparity tuning and the statistics of disparity in natural viewing<br />
* Abstract:<br />
<br />
'''March 11, 2015'''<br />
* Speaker: Jozsef Fiser<br />
* Affiliation: Central European University<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''April 1, 2015'''<br />
* Speaker: Saeed Saremi<br />
* Affiliation: Salk Inst<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''April 15, 2015'''<br />
* Speaker: Zahra M. Aghajan<br />
* Affiliation: UCLA<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Hippocampal Activity in Real and Virtual Environments<br />
* Abstract:<br />
<br />
'''May 7, 2015'''<br />
* Speaker: Santani Teng<br />
* Affiliation: MIT<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''May 13, 2015'''<br />
* Speaker: Harri Valpola<br />
* Affiliation: ZenRobotics<br />
* Host: Brian<br />
* Status: Tentative<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''June 24, 2015'''<br />
* Speaker: Kendrick Kay<br />
* Affiliation: Department of Psychology, Washington University in St. Louis<br />
* Host: Karl<br />
* Status: Confirmed<br />
* Title: Using functional neuroimaging to reveal the computations performed by the human visual system<br />
* Abstract: Visual perception is the result of a complex set of computational transformations performed by neurons in the visual system. Functional magnetic resonance imaging (fMRI) is ideally suited for identifying these transformations, given its excellent spatial resolution and ability to monitor activity across the numerous areas of visual cortex. In this talk, I will review past research in which we used fMRI to develop increasingly accurate models of the stimulus transformations occurring in early and intermediate visual areas. I will then describe recent research in which we successfully extend this approach to high-level visual areas involved in perception of visual categories (e.g. faces) and demonstrate how top-down attention modulates bottom-up stimulus representations. Finally, I will discuss ongoing research targeting regions of ventral temporal cortex that are essential for skilled reading. Our model-based approach, combined with high-field laminar measurements, is expected to provide an integrated picture of how bottom-up stimulus transformations and top-down cognitive factors interact to support rapid and accurate word recognition. Development of quantitative models and associated experimental paradigms may help us understand and diagnose impairments in neural processing that underlie visual disorders such as dyslexia and prosopagnosia.<br />
<br />
=== 2013/14 academic year ===<br />
<br />
'''9 Oct 2013'''<br />
* Speaker: Ekaterina Brocke<br />
* Affiliation: KTH University, Stockholm, Sweden<br />
* Host: Tony<br />
* Status: confirmed<br />
* Title: Multiscale modeling in Neuroscience: first steps towards multiscale co-simulation tool development.<br />
* Abstract: Multiscale modeling and simulation attract an increasing number of neuroscientists who study how different levels of organization (networks of neurons, cellular/subcellular levels) interact with each other across multiple scales of space and time to mediate different brain functions. Different scales are usually described by different physical and mathematical formalisms, making it non-trivial to perform the integration. In this talk, I will discuss key phenomena in neuroscience that can be addressed using subcellular/cellular models, and possible approaches to performing multiscale simulations, in particular a co-simulation method. I will also introduce several multiscale "toy" models of cellular/subcellular levels that were developed with the aim of understanding the numerical and technical problems which might appear during co-simulation. Finally, I will present the first steps made towards the development of a multiscale co-simulation tool.<br />
<br />
'''29 Oct 2013 - note: 4:00'''<br />
* Speaker: Mitya Chklovskii<br />
* Affiliation: HHMI/Janelia Farm<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''30 Oct 2013'''<br />
* Speaker: Ilya Nemenman<br />
* Affiliation: Emory University, Departments of Physics and Biology<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Large N in neural data -- expecting the unexpected.<br />
* Abstract: Recently it has become possible to directly measure simultaneous collective states of many biological components, such as neural activities, genetic sequences, or gene expression profiles. These data are revealing striking results, suggesting, for example, that biological systems are tuned to criticality, and that effective models of these systems based on only pairwise interactions among constitutive components provide surprisingly good fits to the data. We will explore a handful of simplified theoretical models, largely focusing on statistical mechanics of Ising spins, that suggest plausible explanations for these observations. Specifically, I will argue that, at least in certain contexts, these intriguing observations should be expected in multivariate interacting data in the thermodynamic limit of many interacting components.<br />
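The "models based on only pairwise interactions" referred to above are maximum-entropy (Ising) models over binary units. A minimal Gibbs sampler for this model class, purely as an illustration (the fields `h` and couplings `J` here are toy parameters, not fitted to neural data):

```python
import numpy as np

def sample_ising(h, J, n_sweeps=200, rng=None):
    """Gibbs-sample spins s in {-1,+1} from the pairwise (Ising) model
    P(s) proportional to exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j).
    J is assumed symmetric with zero diagonal."""
    rng = rng or np.random.default_rng(0)
    n = h.size
    s = rng.choice([-1, 1], size=n)
    for _ in range(n_sweeps):
        for i in range(n):
            field = h[i] + J[i] @ s - J[i, i] * s[i]   # local effective field
            p_up = 1.0 / (1.0 + np.exp(-2.0 * field))  # P(s_i = +1 | rest)
            s[i] = 1 if rng.random() < p_up else -1
    return s
```

With strong positive couplings the sampler produces the aligned (correlated) states that make such pairwise models surprisingly good fits to multivariate neural data.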
<br />
'''31 Oct 2013'''<br />
* Speaker: Oriol Vinyals<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Brian<br />
* Status: confirmed<br />
* Title: Beyond Deep Learning: Scalable Methods and Models for Learning<br />
* Abstract: In this talk I will briefly describe several techniques I explored in my thesis that improve how to efficiently model signal representations and learn useful information from them. The building block of my dissertation is based on machine learning approaches to classification, where a (typically non-linear) function is learned from labeled examples to map from signals to some useful information (e.g. an object class present in an image, or a word present in an acoustic signal). One of the motivating factors of my work has been the advance of neural networks with deep architectures (which has led to the terminology "deep learning"), which have shown state-of-the-art performance in acoustic modeling and object recognition -- the main focus of this thesis. In my work, I have contributed both to the learning (or training) of such architectures through faster and more robust optimization techniques, and to the simplification of the deep architecture model to an approach that is simple to optimize. Furthermore, I derived a theoretical bound showing a fundamental limitation of shallow architectures based on sparse coding (which can be seen as a one-hidden-layer neural network), thus justifying the need for deeper architectures, while also empirically verifying these architectural choices on speech recognition. Many of my contributions have been used in a wide variety of applications, products and datasets as a result of many collaborations within ICSI and Berkeley, but also at Microsoft Research and Google Research.<br />
<br />
'''6 Nov 2013'''<br />
* Speaker: Garrett T. Kenyon<br />
* Affiliation: Los Alamos National Laboratory, The New Mexico Consortium<br />
* Host: Dylan Paiton<br />
* Status: Confirmed<br />
* Title: Using Locally Competitive Algorithms to Model Top-Down and Lateral Interactions<br />
* Abstract: Cortical connections consist of feedforward, feedback and lateral pathways. Infragranular layers project down the cortical hierarchy to both supra- and infragranular layers at the previous processing level, while the neurons in supragranular layers are linked by extensive long-range lateral projections that cross multiple cortical columns. However, most functional models of visual cortex only account for feedforward connections. Additionally, most models of visual cortex fail to account both for the thalamic projections to non-striate areas and the reciprocal connections from extrastriate areas back to the thalamus. In this talk, I will describe how a modified Locally Competitive Algorithm (LCA; Rozell et al, Neural Comp, 2008) can be used as a unifying framework for exploring the role of top-down and lateral cortical pathways within the context of deep, sparse, generative models. I will also describe an open source software tool called PetaVision that can be used to implement and execute hierarchical LCA-based models on multi-core, multi-node computer platforms without requiring specific knowledge of parallel-programming constructs.<br />
<br />
'''14 Nov 2013 (note: Thursday), ***12:30pm*** '''<br />
* Speaker: Geoffrey J Goodhill<br />
* Affiliation: Queensland Brain Institute and School of Mathematics and Physics, The University of Queensland, Australia<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Computational principles of neural wiring development<br />
* Abstract: Brain function depends on precise patterns of neural wiring. An axon navigating to its target must make guidance decisions based on noisy information from molecular cues in its environment. I will describe a combination of experimental and computational work showing that (1) axons may act as ideal observers when sensing chemotactic gradients, (2) the complex influence of calcium and cAMP levels on guidance decisions can be predicted mathematically, (3) the morphology of growth cones at the axonal tip can be understood in terms of just a few eigenshapes, and remarkably these shapes oscillate in time with periods ranging from minutes to hours. Together this work may shed light on how neural wiring goes wrong in some developmental brain disorders, and how best to promote appropriate regrowth of axons after injury.<br />
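The "eigenshapes" in point (3) are, in essence, principal components of aligned growth-cone outlines. A minimal sketch of that kind of analysis, assuming outlines have already been flattened to fixed-length coordinate vectors (an illustration of the idea, not the authors' pipeline):

```python
import numpy as np

def eigenshapes(shapes, n_components=3):
    """PCA on outline coordinates: each row of `shapes` is one growth-cone
    outline flattened to a vector. Returns the mean shape and the top
    principal components ('eigenshapes') via SVD of the centered data."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]
```

Projecting each outline onto the first few eigenshapes gives a handful of coefficients per time point, which is what makes oscillations in shape over minutes-to-hours timescales visible.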
<br />
'''4 Dec 2013'''<br />
* Speaker: Zhenwen Dai<br />
* Affiliation: FIAS, Goethe University Frankfurt, Germany.<br />
* Host: Georgios Exarchakis<br />
* Status: Confirmed<br />
* Title: What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach <br />
* Abstract: We study optimal image encoding based on a generative approach with non-linear feature combinations and explicit position encoding. By far most approaches to unsupervised learning of visual features, such as sparse coding or ICA, account for translations by representing the same features at different positions. Some earlier models used a separate encoding of features and their positions to facilitate invariant data encoding and recognition. All probabilistic generative models with explicit position encoding have so far assumed a linear superposition of components to encode image patches. Here, we apply for the first time a model with non-linear feature superposition and explicit position encoding to image patches. By avoiding linear superpositions, the studied model represents a closer match to component occlusions, which are ubiquitous in natural images. In order to account for occlusions, the non-linear model encodes patches qualitatively very differently from linear models by using component representations separated into mask and feature parameters. We first investigated encodings learned by the model using artificial data with mutually occluding components. We find that the model extracts the components, and that it can correctly identify the occlusive components with the hidden variables of the model. On natural image patches, the model learns component masks and features for typical image components. By using reverse correlation, we estimate the receptive fields associated with the model’s hidden units. We find many Gabor-like or globular receptive fields as well as fields sensitive to more complex structures. Our results show that probabilistic models that capture occlusions and invariances can be trained efficiently on image patches, and that the resulting encoding represents an alternative model for the neural encoding of images in the primary visual cortex. <br />
<br />
'''11 Dec 2013'''<br />
* Speaker: Kai Siedenburg<br />
* Affiliation: UC Davis, Petr Janata's Lab.<br />
* Host: Jesse Engel<br />
* Status: Confirmed<br />
* Title: Characterizing Short-Term Memory for Musical Timbre<br />
* Abstract: Short-term memory is a cognitive faculty central for the apprehension of music and speech. Little is known, however, about memory for musical timbre despite its “sisterhood” with speech; after all, speech can be regarded as sequencing of vocal timbre. Past research has isolated many characteristic effects of verbal memory. Are these also in play for non-vocal timbre sequences? We studied this question by considering short-term memory for serial order. Using timbres and dissimilarity data from McAdams et al. (Psych. Research, 1995), we employed a same/different discrimination paradigm. Experiment 1 (N = 30 MU + 30 nonMU) revealed effects of sequence length and timbral dissimilarity of items, as well as an interaction of musical training and pitch variability: in contrast to musicians, non-musicians' performance was impaired by simultaneous changes in pitch, compared to a constant pitch baseline. Experiment 2 (N = 22) studied whether musicians' memory for timbre sequences was independent of pitch irrespective of the degree of complexity of pitch progressions. Comparing sequences with pitch changing within and across standard and comparison to a constant pitch baseline, performance was now clearly impaired for the variable pitch condition. Experiment 3 (N = 22) showed primacy and recency effects for musicians, and reproduced a positive effect of timbral heterogeneity of sequences. Our findings demonstrate the presence of hallmark effects of verbal memory such as similarity, word length, and primacy/recency for the domain of non-vocal timbre, and suggest that memory for speech and non-vocal timbre sequences might to a large extent share underlying mechanisms.<br />
<br />
'''12 Dec 2013'''<br />
* Speaker: Matthias Bethge<br />
* Affiliation: University of Tubingen<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''22 Jan 2014'''<br />
* Speaker: Thomas Martinetz<br />
* Affiliation: Univ Luebeck<br />
* Host: Bruno/Fritz<br />
* Status: confirmed<br />
* Title: Orthogonal Sparse Coding and Sensing<br />
* Abstract: Sparse Coding has been a very successful concept since many natural signals have the property of being sparse in some dictionary (basis). Some natural signals are even sparse in an orthogonal basis, most prominently natural images, which are sparse in a suitable wavelet transform. An encoding in an orthogonal basis has a number of advantages, e.g., finding the optimal coding coefficients is simply a projection instead of being NP-hard.<br />
Given some data, we want to find the orthogonal basis which provides the sparsest code. This problem can be seen as a generalization of Principal Component Analysis. We present an algorithm, Orthogonal Sparse Coding (OSC), which is able to find this basis very robustly. On natural images, it compresses at the level of JPEG, but it can adapt to arbitrary and special data sets and achieve significant improvements. Using the property of being sparse in some orthogonal basis, we show how signals can be sensed very efficiently in a hierarchical manner with at most k log D sensing actions. This hierarchical sensing might relate to the way we sense the world, with interesting applications in active vision. <br />
<br />
'''29 Jan 2014'''<br />
* Speaker: David Klein<br />
* Affiliation: Audience<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''5 Feb 2014''' (leave open for Barth/Martinetz seminar)<br />
<br />
'''12 Feb 2014'''<br />
* Speaker: Ilya Sutskever <br />
* Affiliation: Google<br />
* Host: Zayd<br />
* Status: confirmed<br />
* Title: Continuous vector representations for machine translation<br />
* Abstract: Dictionaries and phrase tables are the basis of modern statistical machine translation systems. I will present a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures using large monolingual data, and by mapping between the languages using a small bilingual dataset. It uses distributed representations of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs. Joint work with Tomas Mikolov and Quoc Le.<br />
<br />
'''25 Feb 2014'''<br />
* Speaker: Alexander Terekhov <br />
* Affiliation: CNRS - Université Paris Descartes<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Constructing space: how a naive agent can learn spatial relationships by observing sensorimotor contingencies<br />
* Abstract:<br />
<br />
'''12 March 2014'''<br />
* Speaker: Carlos Portera-Cailliau<br />
* Affiliation: UCLA<br />
* Host: Mike<br />
* Status: confirmed<br />
* Title: Circuit defects in the neocortex of Fmr1 knockout mice<br />
* Abstract: TBA<br />
<br />
'''19 March 2014'''<br />
* Speaker: Dean Buonomano<br />
* Affiliation: UCLA<br />
* Host: Mike<br />
* Status: confirmed<br />
* Title: State-dependent Networks: Timing and Computations Based on Neural Dynamics and Short-term Plasticity<br />
* Abstract: The brain’s ability to seamlessly assimilate and process spatial and temporal information is critical to most behaviors, from understanding speech to playing the piano. Indeed, because the brain evolved to navigate a dynamic world, timing and temporal processing represent a fundamental computation. We have proposed that timing and the processing of temporal information emerge from the interaction between incoming stimuli and the internal state of neural networks. The internal state is defined not only by ongoing activity (the active state) but also by time-varying synaptic properties, such as short-term synaptic plasticity (the hidden state). One prediction of this hypothesis is that timing is a general property of cortical circuits. We provide evidence in this direction by demonstrating that in vitro cortical networks can “learn” simple temporal patterns. Finally, previous theoretical studies have suggested that recurrent networks capable of self-perpetuating activity hold significant computational potential. However, harnessing the computational potential of these networks has been hampered by the fact that such networks are chaotic. We show that it is possible to “tame” chaos through recurrent plasticity, and create a novel and powerful general framework for how cortical circuits compute.<br />
<br />
'''26 March 2014'''<br />
* Speaker: Robert G. Smith<br />
* Affiliation: University of Pennsylvania<br />
* Host: Mike S<br />
* Status: confirmed<br />
* Title: Role of Dendritic Computation in the Direction-Selective Circuit of Retina<br />
* Abstract: The retina utilizes a variety of signal processing mechanisms to compute direction from image motion. The computation is accomplished by a circuit that includes starburst amacrine cells (SBACs), which are GABAergic neurons presynaptic to direction-selective ganglion cells (DSGCs). SBACs are symmetric neurons with several branched dendrites radiating out from the soma. When a stimulus moving back and forth along a SBAC dendrite sequentially activates synaptic inputs, larger post-synaptic potentials (PSPs) are produced in the dendritic tips when the stimulus moves outwards from the soma. The directional difference in EPSP amplitude is further amplified near the dendritic tips by voltage-gated channels to produce directional release of GABA. Reciprocal inhibition between adjacent SBACs may also amplify directional release. Directional signals in the independent SBAC branches are preserved because each dendrite makes selective contacts only with DSGCs of the appropriate preferred direction. Directional signals are further enhanced within the dendritic arbor of the DSGC, which essentially comprises an array of distinct dendritic compartments. Each of these dendritic compartments locally sums excitatory and inhibitory inputs, amplifies them with voltage-gated channels, and generates spikes that propagate to the axon via the soma. Overall, the computation of direction in the retina is performed by several local dendritic mechanisms, both presynaptic and postsynaptic, with the result that directional responses are robust over a broad range of stimuli.<br />
<br />
'''16 April 2014'''<br />
* Speaker: David Pfau<br />
* Affiliation: Columbia<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''22 April 2014 *Tuesday*'''<br />
* Speaker: Jochen Braun<br />
* Affiliation: Otto-von-Guericke University, Magdeburg<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Dynamics of visual perception and collective neural activity<br />
* Abstract:<br />
<br />
'''29 April 2014'''<br />
* Speaker: Giuseppe Vitiello<br />
* Affiliation: University of Salerno<br />
* Host: Fritz/Walter Freeman<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''30 April 2014'''<br />
* Speaker: Masataka Watanabe<br />
* Affiliation: University of Tokyo / Max Planck Institute for Biological Cybernetics<br />
* Host: Gautam Agarwal<br />
* Status: confirmed<br />
* Title: Turing Test for Machine Consciousness and the Chaotic Spatiotemporal Fluctuation Hypothesis<br />
* Abstract: I propose an experimental method to test various hypotheses on consciousness. Inspired by Sperry's observation that split-brain patients possess two independent streams of consciousness, the idea is to implement candidate neural mechanisms of visual consciousness onto an artificial cortical hemisphere and test whether subjective experience is evoked in the device's visual hemifield. In contrast to modern neurosynthetic devices, I show that mimicking interhemispheric connectivity assures that authentic and fine-grained subjective experience arises only when a stream of consciousness is generated within the device. It is valid under a widely believed assumption regarding interhemispheric connectivity and neuronal stimulus-invariance. (I will briefly explain my own evidence of human V1 not responding to changes in the contents of visual awareness [1])<br />
<br />
If consciousness is actually generated within the device, we should be able to construct a case where two objects presented in the device's visual field are distinguishable by visual experience but not by what is communicated through the brain-machine interface. As strange as it may sound, and clearly violating the laws of physics, this is likely to be happening in the intact brain, where unified subjective bilateral vision and its verbal report occur without the total interhemispheric exchange of conscious visual information.<br />
<br />
Together, I present a hypothesis on the neural mechanism of consciousness, “The Chaotic Spatiotemporal Fluctuation Hypothesis”, which passes the proposed test for visual qualia and also explains how the physics that we know of today is violated. Here, neural activity is divided into two components, the time-averaged activity and the residual temporally fluctuating activity, where the former serves as the content of consciousness (neuronal population vector) and the latter as consciousness itself. The content is “read” into consciousness in the sense that every local perturbation caused by a change in the neuronal population vector creates a spatiotemporal wave in the fluctuation component that travels throughout the system. Deterministic chaos assures that every local difference makes a difference to the whole of the dynamics, as in the butterfly effect, serving as a foundation for the holistic nature of consciousness. I will present data from simultaneous electrophysiology-fMRI recordings and human fMRI [2] that support the existence of such large-scale causal fluctuation.<br />
<br />
Here, the chaotic fluctuation cannot be decoded to trace back the original perturbation in the neuronal population vector, because the initial states of all neurons would be required with infinite precision to do so. Hence what is transmitted between the two hemispheres is not "information" in the normal sense. This illustrates the violation of physics by the metaphysical assumption that "chaotic spatiotemporal fluctuation is consciousness", where unification of bilateral vision and the solving of visual tasks (e.g. perfect symmetry detection) are achieved without exchanging the otherwise required Shannon information between the two hemispheres.<br />
<br />
Finally, minimal and realistic versions of the proposed test for visual qualia can be conducted on laboratory animals to validate the hypothesis. These versions deal with two biological hemispheres, which we already know contain consciousness. We dissect the interhemispheric connectivity and instead form an artificial one that is capable of filtering out the neural fluctuation component. A limited interhemispheric connectivity may be sufficient, which would drastically reduce the technological challenge. If the subject is capable of conducting a bilateral stimulus-matching task with the full artificial interhemispheric connectivity, but not when the fluctuation component is filtered out, this can be considered strong supporting evidence for the hypothesis.<br />
<br />
1.Watanabe, M., Cheng, K., Ueno, K., Asamizuya, T., Tanaka, K., Logothetis, N., Attention but not awareness modulates the BOLD signal in the human V1 during binocular suppression. Science, 2011. 334(6057): p. 829-31.<br />
<br />
2.Watanabe, M., Bartels, A., Macke, J., Logothetis, N., Temporal jitter of the BOLD signal reveals a reliable initial dip and improved spatial resolution. Curr Biol, 2013. 23(21): p. 2146-50.<br />
<br />
'''11 June 2014'''<br />
* Speaker: Stuart Hameroff<br />
* Affiliation: University of Arizona, Tucson<br />
* Host: Gautam<br />
* Status: confirmed<br />
* Title: ‘Tuning the brain’ – Treating mental states through microtubule vibrations <br />
* Abstract: Do mental states derive entirely from brain neuronal membrane activities? Neuronal interiors are organized by microtubules (‘MTs’), protein polymers proposed to encode memory, process information and support consciousness. Using nanotechnology, Bandyopadhyay’s group at MIT has shown coherent vibrations (megahertz to 10 kilohertz) from microtubule bundles inside active neurons, vibrations (electric field potentials ~40 to 50 mV) able to influence membrane potentials. This suggests EEG rhythms are ‘beat’ frequencies of megahertz vibrations in microtubules inside neurons (Hameroff and Penrose, 2014), and that consciousness and cognition involve vibrational patterns resonating across scales in the brain, more like music than computation. MT megahertz vibrations may be a useful therapeutic target for ‘tuning’ mood and mental states. Among noninvasive transcranial brain stimulation techniques (TMS, tDCS), transcranial ultrasound (TUS) consists of megahertz mechanical vibrations. Applied at the scalp, low-intensity, sub-thermal ultrasound safely reaches the brain. In human studies, brief (15 to 30 seconds) TUS at 0.5, 2 and 8 megahertz to frontal-temporal cortex results in 40 minutes or longer of reported mood improvement, and focused TUS enhances sensory discrimination (Legon et al, 2014). In vitro, ultrasound promotes neurite outgrowth in embryonic neurons (Raman), and stabilizes microtubules against disassembly (Gupta). (In Alzheimer’s disease, MTs disassemble and release tau.) These findings suggest ‘tuning the brain’ with TUS should be a safe, effective and inexpensive treatment for Alzheimer’s, traumatic brain injury, depression, anxiety, PTSD and other disorders. <br />
<br />
References: Hameroff S, Penrose R (2014) Phys Life Rev http://www.sciencedirect.com/science/article/pii/S1571064513001188; Sahu et al (2013) Biosens Bioelectron 47:141–8; Sahu et al (2013) Appl Phys Lett 102:123701; Legon et al (2014) Nature Neuroscience 17: 322–329<br />
<br />
'''25 June 2014'''<br />
* Speaker: Peter Loxley<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: The two-dimensional Gabor function adapted to natural image statistics: An analytical model of simple-cell responses in the early visual system<br />
* Abstract: TBA<br />
<br />
=== 2012/13 academic year ===<br />
<br />
'''26 Sept 2012''' <br />
* Speaker: Jason Yeatman<br />
* Affiliation: Department of Psychology, Stanford University<br />
* Host: Bruno/Susana Chung<br />
* Status: confirmed<br />
* Title: The Development of White Matter and Reading Skills<br />
* Abstract: The development of cerebral white matter involves both myelination and pruning of axons, and the balance between these two processes may differ between individuals. Cross-sectional measures of white matter development mask the interplay between these active developmental processes and their connection to cognitive development. We followed a cohort of 39 children longitudinally for three years, and measured white matter development and reading development using diffusion tensor imaging and behavioral tests. In the left arcuate and inferior longitudinal fasciculus, children with above-average reading skills initially had low fractional anisotropy (FA) with a steady increase over the 3-year period, while children with below-average reading skills had higher initial FA that declined over time. We describe a dual-process model of white matter development that balances biological processes that have opposing effects on FA, such as axonal myelination and pruning, to explain the pattern of results.<br />
<br />
'''8 Oct 2012''' <br />
* Speaker: Sophie Deneve<br />
* Affiliation: Laboratoire de Neurosciences cognitives, ENS-INSERM<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Balanced spiking networks can implement dynamical systems with predictive coding<br />
* Abstract: Neural networks can integrate sensory information and generate continuously varying outputs, even though individual neurons communicate only with spikes---all-or-none events. Here we show how this can be done efficiently if spikes communicate "prediction errors" between neurons. We focus on the implementation of linear dynamical systems and derive a spiking network model from a single optimization principle. Our model naturally accounts for two puzzling aspects of cortex. First, it provides a rationale for the tight balance and correlations between excitation and inhibition. Second, it predicts asynchronous and irregular firing as a consequence of predictive population coding, even in the limit of vanishing noise. We show that our spiking networks have error-correcting properties that make them far more accurate and robust than comparable rate models. Our approach suggests spike times do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly under-estimated.<br />
<br />
<br />
'''19 Oct 2012'''<br />
* Speaker: Gert Van Dijck<br />
* Affiliation: Cambridge<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: A solution to identifying neurones using extracellular activity in awake animals: a probabilistic machine-learning approach<br />
* Abstract: Electrophysiological studies over the last fifty years have been hampered by the difficulty of reliably assigning signals to identified cortical neurones. Previous studies have employed a variety of measures based on spike timing or waveform characteristics to tentatively classify neurone types (Vos et al., Eur. J. Neurosci., 1999; Prsa et al., J. Neurosci., 2009), in some cases supported by juxtacellular labelling (Simpson et al., Prog. Brain Res., 2005; Holtzman et al., J. Physiol., 2006; Barmack and Yakhnitsa, J. Neurosci., 2008; Ruigrok et al., J. Neurosci., 2011), or intracellular staining and/or assessment of membrane properties (Chadderton et al., Nature, 2004; Jorntell and Ekerot, J. Neurosci., 2006; Rancz et al., Nature, 2007). Anaesthetised animals have been widely used as they can provide a ground truth through neuronal labelling, which is much harder to achieve in awake animals, where spike-derived measures tend to be relied upon (Lansink et al., Eur. J. Neurosci., 2010). Whilst spike shapes carry potentially useful information for classifying neuronal classes, they vary with electrode type and the geometric relationship between the electrode and the spike generation zone (Van Dijck et al., Int. J. Neural Syst., 2012). Moreover, spike-shape measurement is achieved with a variety of techniques, making it difficult to compare and standardise between laboratories. In this study we build probabilistic models on the statistics derived from the spike trains of spontaneously active neurones in the cerebellum and the ventral midbrain. The mean spike frequency in combination with the log-interval entropy (Bhumbra and Dyball, J. Physiol.-London, 2004) of the inter-spike-interval distribution yields the highest prediction accuracy. The cerebellum model consists of two sub-models: a molecular layer - Purkinje layer model and a granular layer - Purkinje layer model. The first model identifies molecular layer interneurones and Purkinje cells with high accuracy (92.7 %), while the latter identifies Golgi cells, granule cells, mossy fibers and Purkinje cells with high accuracy (99.2 %). Furthermore, it is shown that the model trained on anaesthetized rat and decerebrate cat data has broad applicability to other species and behavioural states: anaesthetized mice (80 %), awake rabbits (94.2 %) and awake rhesus monkeys (89 - 90 %). Recently, optogenetics has made it possible to obtain a ground truth about cell classes. Using optogenetically identified GABAergic and dopaminergic cells, we build similar statistical models to identify these neuron types in the ventral midbrain. This illustrates that our approach will be of general use to a broad variety of laboratories.<br />
<br />
'''Tuesday, 23 Oct 2012''' <br />
* Speaker: Jaimie Sleigh<br />
* Affiliation: University of Auckland<br />
* Host: Fritz/Andrew Szeri<br />
* Status: confirmed<br />
* Title: Is General Anesthesia a failure of cortical information integration?<br />
* Abstract: General anesthesia and natural sleep share some commonalities and some differences. Quite a lot is known about the chemical and neuronal effects of general anesthetic drugs. There are two main groups of anesthetic drugs, which can be distinguished by their effects on the EEG. The most commonly used drugs exert a strong GABAergic action; whereas a second group is characterized by minimal GABAergic effects, but significant NMDA blockade. It is less clear which and how these various effects result in failure of the patient to wake up when the surgeon cuts them. I will present some results from experimental brain slice work, and theoretical mean field modelling of anesthesia and sleep, that support the idea that the final common mechanism of both types of anaesthesia is fragmentation of long distance information flow in the cortex.<br />
<br />
'''31 Oct 2012''' (Halloween)<br />
* Speaker: Jonathan Landy<br />
* Affiliation: UCSB<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Mean-field replica theory: review of basics and a new approach<br />
* Abstract: Replica theory provides a general method for evaluating the mode of a distribution, and has varied applications to problems in statistical mechanics, signal processing, etc. Evaluation of the formal expressions arising in replica theory represents a formidable technical challenge, but one that physicists have apparently intuited correct methods for handling. In this talk, I will first provide a review of the historical development of replica theory, covering: 1) motivation, 2) the intuited "Parisi ansatz" solution, 3) continued controversies, and 4) a survey of applications (including to neural networks). Following this, I will discuss an exploratory effort of mine, aimed at developing an ansatz-free solution method. As an example, I will work out the phase diagram for a simple spin-glass model. This talk is intended primarily as a tutorial.<br />
<br />
'''7 Nov 2012''' <br />
* Speaker: Tom Griffiths<br />
* Affiliation: UC Berkeley<br />
* Host:Daniel Little<br />
* Status: Confirmed<br />
* Title: Identifying human inductive biases<br />
* Abstract: People are remarkably good at acquiring complex knowledge from limited data, as is required in learning causal relationships, categories, or aspects of language. Successfully solving inductive problems of this kind requires having good "inductive biases" - constraints that guide inductive inference. Viewed abstractly, understanding human learning requires identifying these inductive biases and exploring their origins. I will argue that probabilistic models of cognition provide a framework that can facilitate this project, giving a transparent characterization of the inductive biases of ideal learners. I will outline how probabilistic models are traditionally used to solve this problem, and then present a new approach that uses Markov chain Monte Carlo algorithms as the basis for an experimental method that magnifies the effects of inductive biases.<br />
<br />
'''19 Nov 2012''' (Monday) (Thanksgiving week)<br />
* Speaker: Bin Yu<br />
* Affiliation: Dept. of Statistics and EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Representation of Natural Images in V4<br />
* Abstract: The functional organization of area V4 in the mammalian ventral visual pathway is far from being well understood. V4 is believed to play an important role in the recognition of shapes and objects and in visual attention, but the complexity of this cortical area makes it hard to analyze. In particular, no current model of V4 has shown good predictions for neuronal responses to natural images and there is no consensus on the primary role of V4.<br />
In this talk, we present analysis of electrophysiological data on the response of V4 neurons to natural images. We propose a new computational model that achieves comparable prediction performance for V4 as for V1 neurons. Our model does not rely on any pre-defined image features but only on invariance and sparse coding principles. We interpret our model using sparse principal component analysis and discover two groups of neurons: those selective to texture versus those selective to contours. This supports the thesis that one primary role of V4 is to extract objects from background in the visual field. Moreover, our study also confirms the diversity of V4 neurons. Among those selective to contours, some of them are selective to orientation, others to acute curvature features.<br />
(This is joint work with J. Mairal, Y. Benjamini, B. Willmore, M. Oliver and J. Gallant.)<br />
<br />
'''30 Nov 2012''' <br />
* Speaker: Yan Karklin<br />
* Affiliation: NYU<br />
* Host: Tyler<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''10 Dec 2012 (note this would be the Monday after NIPS)''' <br />
* Speaker: Marius Pachitariu<br />
* Affiliation: Gatsby / UCL<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: NIPS paper "Learning visual motion in recurrent neural networks"<br />
* Abstract: We present a dynamic nonlinear generative model for visual motion based on a latent representation of binary-gated Gaussian variables connected in a network. Trained on sequences of images by an STDP-like rule, the model learns to represent different movement directions in different variables. We use an online approximate inference scheme that can be mapped to the dynamics of networks of neurons. Probed with drifting grating stimuli and moving bars of light, neurons in the model show patterns of responses analogous to those of direction-selective simple cells in primary visual cortex. We show how the computations of the model are enabled by a specific pattern of learnt asymmetric recurrent connections. I will also briefly discuss our application of recurrent neural networks as statistical models of simultaneously recorded spiking neurons. <br />
<br />
'''12 Dec 2012''' <br />
* Speaker: Ian Goodfellow<br />
* Affiliation: U Montreal<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''7 Jan 2013'''<br />
* Speaker: Stuart Hameroff<br />
* Affiliation: University of Arizona <br />
* Host: Gautam Agarwal<br />
* Status: confirmed<br />
* Title: Quantum cognition and brain microtubules <br />
* Abstract: Cognitive decision processes are generally modeled with classical Bayesian probabilities, but may be better suited to quantum mathematics. For example: 1) Psychological conflict, ambiguity and uncertainty can be viewed as (quantum) superposition of multiple possible judgments and beliefs. 2) Measurement (e.g. answering a question, reaching a decision) reduces possibilities to definite states (‘constructing reality’, ‘collapsing the wave function’). 3) Previous questions influence subsequent answers, so sequence affects outcomes (‘contextual non-commutativity’). 4) Judgments and choices may deviate from classical logic, suggesting random, or ‘non-computable’, quantum influences. Can quantum cognition operate in the brain? Do classical brain activities simulate quantum processes? Or have biomolecular quantum devices evolved? In this talk I will discuss how a finer-scale, intra-neuronal level of quantum information processing in cytoskeletal microtubules can accumulate, operate upon and integrate quantum information and memory for self-collapse to classical states which regulate axonal firings, controlling behavior.<br />
<br />
'''Monday 14 Jan 2013, 1:00pm'''<br />
* Speaker: Dibyendu Mandal <br />
* Affiliation: Physics Dept., University of Maryland (Jarzynski group)<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: An exactly solvable model of Maxwell’s demon<br />
* Abstract: The paradox of Maxwell’s demon has stimulated numerous thought experiments, leading to discussions about the thermodynamic implications of information processing. However, the field has lacked a tangible example or model of an autonomous, mechanical system that reproduces the actions of the demon. To address this issue, we introduce an explicit model of a device that can deliver work to lift a mass against gravity by rectifying thermal fluctuations, while writing information to a memory register. We solve for the steady-state behavior of the model and construct its nonequilibrium phase diagram. In addition to the engine-like action described above, we identify a Landauer eraser region in the phase diagram where the model uses externally supplied work to remove information from the memory register. Our model offers a simple paradigm for investigating the thermodynamics of information processing by exposing a transparent mechanism of operation.<br />
<br />
'''23 Jan 2013'''<br />
* Speaker: Carlos Brody<br />
* Affiliation: Princeton<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Neural substrates of decision-making in the rat<br />
* Abstract: Gradual accumulation of evidence is thought to be a fundamental component of decision-making. Over the last 16 years, research in non-human primates has revealed neural correlates of evidence accumulation in parietal and frontal cortices, and other brain areas. However, the circuit mechanisms underlying these neural correlates remain unknown. Reasoning that a rodent model of evidence accumulation would allow a greater number of experimental subjects, and therefore experiments, as well as facilitate the use of molecular tools, we developed a rat accumulation of evidence task, the "Poisson Clicks" task. In this task, sensory evidence is delivered in pulses whose precisely-controlled timing varies widely within and across trials. The resulting data are analyzed with models of evidence accumulation that use the richly detailed information of each trial’s pulse timing to distinguish between different decision mechanisms. The method provides great statistical power, allowing us to: (1) provide compelling evidence that rats are indeed capable of gradually accumulating evidence for decision-making; (2) accurately estimate multiple parameters of the decision-making process from behavioral data; and (3) measure, for the first time, the diffusion constant of the evidence accumulator, which we show to be optimal (i.e., equal to zero). In addition, the method provides a trial-by-trial, moment-by-moment estimate of the value of the accumulator, which can then be compared in awake behaving electrophysiology experiments to trial-by-trial, moment-by-moment neural firing rate measures. Based on such a comparison, we describe data and a novel analysis approach that reveals differences between parietal and frontal cortices in the neural encoding of accumulating evidence. 
Finally, using semi-automated training methods to produce tens of rats trained in the Poisson Clicks accumulation of evidence task, we have also used pharmacological inactivation to ask, for the first time, whether parietal and frontal cortices are required for accumulation of evidence, and we are using optogenetic methods to rapidly and transiently inactivate brain regions so as to establish precisely when, during each decision-making trial, each brain region's activity is necessary for performance of the task.<br />
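The pulsed-evidence accumulator described in the abstract can be illustrated with a toy simulation (a minimal sketch; the function name, parameters, and the simple sign-of-total decision rule are my own, not the published model):

```python
import random

def simulate_poisson_clicks_trial(rate_left, rate_right, duration,
                                  dt=0.001, diffusion_sd=0.0, seed=None):
    """Toy evidence accumulator for a Poisson-clicks-style task.

    Left and right click trains are approximated as Bernoulli events per
    time step (a discrete-time Poisson process); each right click adds +1
    and each left click adds -1 to the accumulator, plus optional
    diffusion noise. Returns +1 for a 'right' choice, -1 for 'left'.
    All names and parameter values are illustrative, not the paper's.
    """
    rng = random.Random(seed)
    a = 0.0  # accumulated evidence
    for _ in range(int(duration / dt)):
        if rng.random() < rate_right * dt:
            a += 1.0
        if rng.random() < rate_left * dt:
            a -= 1.0
        if diffusion_sd > 0.0:
            # diffusion noise scales with sqrt(dt)
            a += rng.gauss(0.0, diffusion_sd * dt ** 0.5)
    return 1 if a >= 0.0 else -1
```

With diffusion_sd = 0 (the optimal value the abstract reports recovering from rat behavior), the choice is determined entirely by the click counts.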
<br />
'''28 Jan 2013'''<br />
* Speaker: Eugene M. Izhikevich<br />
* Affiliation: Brain Corporation<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Spikes<br />
* Abstract: Most communication in the brain is via spikes. While we understand the spike-generation mechanism of individual neurons, we fail to appreciate the spike-timing code and its role in neural computations. The speaker starts with simple models of neuronal spiking and bursting, describes small neuronal circuits that learn spike-timing code via spike-timing dependent plasticity (STDP), and finishes with biologically detailed and anatomically accurate large-scale brain models.<br />
<br />
'''29 Jan 2013'''<br />
* Speaker: Goren Gordon<br />
* Affiliation: Weizmann Institute<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Hierarchical Curiosity Loops – Model, Behavior and Robotics<br />
* Abstract: Autonomously learning about one's own body and its interaction with the environment is a formidable challenge, yet it is ubiquitous in biology: every animal pup and every human infant accomplishes this task in its first few months of life. Furthermore, biological agents’ curiosity actively drives them to explore and experiment in order to expedite their learning progress. To bridge the gap between biological and artificial agents, a formal mathematical theory of curiosity was developed that attempts to explain observed biological behaviors and enable the emergence of curiosity in robots. In the talk, I will present the hierarchical curiosity loops model, its application to rodents’ exploratory behavior and its implementation in a fully autonomously learning and behaving reaching robot.<br />
<br />
'''29 Jan 2013'''<br />
* Speaker: Jenny Read<br />
* Affiliation: Institute of Neuroscience, Newcastle University<br />
* Host: Sarah<br />
* Status: confirmed<br />
* Title: Stereoscopic vision<br />
* Abstract: [To be written]<br />
<br />
'''7 Feb 2013'''<br />
* Speaker: Valero Laparra<br />
* Affiliation: University of Valencia<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Empirical statistical analysis of phases in Gabor filtered natural images<br />
* Abstract:<br />
<br />
'''20 Feb 2013'''<br />
* Speaker: Dolores Bozovic<br />
* Affiliation: UCLA<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Bifurcations and phase-locking dynamics in the auditory system<br />
* Abstract: The inner ear constitutes a remarkable biological sensor that exhibits nanometer-scale sensitivity of mechanical detection. The first step in auditory processing is performed by hair cells, which convert movement into electrical signals via the opening of mechanically gated ion channels. These cells operate in a viscous medium, but can nevertheless sustain oscillations, amplify incoming signals, and even exhibit spontaneous motility, indicating the presence of an underlying active amplification system. Theoretical models have proposed that a hair cell constitutes a nonlinear system with an internal feedback mechanism that can drive it across a bifurcation and into an unstable regime. Our experiments explore the nonlinear response as well as feedback mechanisms that enable self-tuning already at the peripheral level, as measured in vitro on sensory tissue. A simple dynamical systems framework will be discussed that captures the main features of the experimentally observed behavior in the form of an Arnold tongue.<br />
<br />
'''27 March 2013'''<br />
* Speaker: Dale Purves<br />
* Affiliation: Duke<br />
* Host: Sarah<br />
* Status: confirmed<br />
* Title: How Visual Evolution Determines What We See<br />
* Abstract: Information about the physical world is excluded from visual stimuli by the nature of biological vision (the inverse optics problem). Nonetheless, humans and other visual animals routinely succeed in their environments. The talk will explain how the assignment of perceptual values to visual stimuli according to the frequency of occurrence of stimulus patterns resolves the inverse problem and determines the basic visual qualities we see. This interpretation of vision implies that the best (and perhaps the only) way to understand visual system circuitry is to evolve it, an idea supported by recent work.<br />
<br />
'''9 April 2013'''<br />
* Speaker: Mounya Elhilali<br />
* Affiliation: Johns Hopkins<br />
* Host: Tyler<br />
* Status: confirmed<br />
* Title: Attention at the cocktail party: Neural bases and computational strategies for auditory scene analysis<br />
* Abstract: The perceptual organization of sounds in the environment into coherent objects is a feat constantly facing the auditory system. It manifests itself in the everyday challenge, faced by humans and animals alike, of parsing complex acoustic information arising from multiple sound sources into separate auditory streams. While this process seems effortless, uncovering the neural mechanisms and computational principles underlying this remarkable ability remains a challenge for both the experimental and theoretical neuroscience communities. In this talk, I discuss the potential role of neuronal tuning in mammalian primary auditory cortex in mediating this process. I also examine the role of mechanisms of attention in adapting this neural representation to reflect both the sensory content and the changing behavioral context of complex acoustic scenes.<br />
<br />
'''17 April 2013'''<br />
* Speaker: Wiktor Młynarski<br />
* Affiliation: Max Planck Institute for Mathematics in the Sciences<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: Statistical Models of Binaural Sounds<br />
* Abstract: The auditory system exploits disparities in the sounds arriving at the left and right ear to extract information about the spatial configuration of sound sources. According to the widely acknowledged Duplex Theory, sounds of low frequency are localized based on Interaural Time Differences (ITDs) and localization of high frequency sources relies on Interaural Level Differences (ILDs). Natural sounds, however, possess a rich structure and contain multiple frequency components. This leads to the question: what are the contributions of different cues to sound position identification in the natural environment and how much information do they carry about its spatial structure? In this talk, I will present my attempts to answer the above questions using statistical, generative models of naturalistic (simulated) and fully natural binaural sounds.<br />
<br />
'''15 May 2013'''<br />
* Speaker: Byron Yu<br />
* Affiliation: CMU<br />
* Host: Bruno/Jose (jointly sponsored with CNEP)<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''22 May 2013'''<br />
* Speaker: Bijan Pesaran<br />
* Affiliation: NYU<br />
* Host: Bruno/Jose (jointly sponsored with CNEP)<br />
* Status: confirmed <br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
=== 2011/12 academic year ===<br />
<br />
'''15 Sep 2011 (Thursday, at noon)'''<br />
* Speaker: Kathrin Berkner<br />
* Affiliation: Ricoh Innovations Inc.<br />
* Host: Ivana Tosic<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''21 Sep 2011'''<br />
* Speaker: Mike Kilgard<br />
* Affiliation: UT Dallas<br />
* Host: Michael Silver<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''27 Sep 2011'''<br />
* Speaker: Moshe Gur<br />
* Affiliation: Dept. of Biomedical Engineering, Technion, Israel Institute of Technology<br />
* Host: Bruno/Stan<br />
* Status: Confirmed<br />
* Title: On the unity of perception: How does the brain integrate activity evoked at different cortical loci?<br />
* Abstract: Any physical device we know, including computers, when comparing A to B must send the information to a point C. I have done experiments in three modalities, somatosensory, auditory, and visual, in which two different loci in primary cortex are stimulated, and I argue that the "machine" converging hypothesis cannot explain the perceptual results. Thus we must assume a non-converging mechanism whereby the brain, at times, can compare (integrate, process) events that take place at different loci without sending the information to a common target. Once we allow for such a mechanism, many phenomena can be viewed differently. Take, for example, the question of how and where multi-sensory integration takes place: we perceive a synchronized talking face, yet detailed visual and auditory information are represented at very different brain loci.<br />
<br />
'''5 Oct 2011'''<br />
* Speaker: Susanne Still<br />
* Affiliation: University of Hawaii at Manoa<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: Predictive power, memory and dissipation in learning systems operating far from thermodynamic equilibrium<br />
* Abstract: Understanding the physical processes that underlie the functioning of biological computing machinery often requires describing processes that occur far from thermodynamic equilibrium. In recent years significant progress has been made in this area, most notably Jarzynski’s work relation and Crooks’ fluctuation theorem. In this talk I will explore how dissipation of energy is related to a system's information processing inefficiency. The focus is on driven systems that are embedded in a stochastic operating environment. If we describe the system as a state machine, then we can interpret the stochastic dynamics as performing a computation that results in an (implicit) model of the stochastic driving signal. I will show that instantaneous non-predictive information, which serves as a measure of model inefficiency, provides a lower bound on the average dissipated work. This implies that learning systems with larger predictive power can operate more energy-efficiently. We might speculate that biological systems have evolved to reflect this kind of adaptation. One interesting insight is that this purely physical requirement is perfectly in line with the general belief that a useful model must be predictive (at fixed model complexity). Our result thereby ties together ideas from learning theory with basic non-equilibrium thermodynamics.<br />
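A sketch of the bound mentioned above, following the published formulation of this result (Still, Sivak, Bell and Crooks, "Thermodynamics of Prediction"; the notation here is assumed, not taken from the talk): with <math>s_t</math> the system's memory state and <math>x_t</math> the stochastic driving signal,<br />
<math>\beta \langle W_{\mathrm{diss}} \rangle \;\ge\; I(s_t; x_t) - I(s_t; x_{t+1}),</math><br />
that is, the average work dissipated over one step of the drive is bounded below (in units of <math>k_B T</math>) by the system's memory of the signal minus its predictive power, the instantaneous non-predictive information.<br />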
<br />
'''19 Oct 2011'''<br />
* Speaker: Graham Cummins<br />
* Affiliation: WSU<br />
* Host: Jeff Teeters<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''26 Oct 2011'''<br />
* Speaker: Shinji Nishimoto<br />
* Affiliation: Gallant lab, UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''14 Dec 2011'''<br />
* Speaker: Austin Roorda<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: How the unstable eye sees a stable and moving world<br />
* Abstract:<br />
<br />
'''11 Jan 2012'''<br />
* Speaker: Ken Nakayama<br />
* Affiliation: Harvard University<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Subjective Contours<br />
* Abstract: The concept of the receptive field in visual science has been transformative. It fueled great discoveries of the second half of the 20th century, providing the dominant understanding of how the visual system works at its early stages. Its reign has been extended to the field of object recognition, where, in the form of a linear classifier, it provides a framework for understanding visual object recognition (DiCarlo and Cox, 2007).<br />
Untamed, however, are areas of visual perception, now more or less ignored, dubbed variously the 2.5D sketch, mid-level vision, or surface representations. Here, neurons with their receptive fields seem unable to bridge the gap, to supply us with even a plausible speculative framework for understanding amodal completion, subjective contours and other surface phenomena. Correspondingly, these areas have become a backwater: ignored, leapt over.<br />
Subjective contours, however, remain as vivid as ever, even more so.<br />
Every day, our visual system makes countless inferences about the layout of the world's surfaces and objects. What’s remarkable is that subjective contours visibly reveal these inferences.<br />
<br />
'''Tuesday, 24 Jan 2012'''<br />
* Speaker: Aniruddha Das<br />
* Affiliation: Columbia University<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''22 Feb 2012'''<br />
* Speaker: Elad Schneidman <br />
* Affiliation: Department of Neurobiology, Weizmann Institute of Science<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Sparse high order interaction networks underlie learnable neural population codes<br />
* Abstract:<br />
<br />
'''29 Feb 2012 (at noon as usual)'''<br />
* Speaker: Heather Read<br />
* Affiliation: U. Connecticut<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Transformation of sparse temporal coding from auditory colliculus and cortex<br />
* Abstract: TBD<br />
<br />
'''1 Mar 2012 (note: Thurs)'''<br />
* Speaker: Daniel Zoran<br />
* Affiliation: Hebrew University, Jerusalem<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''7 Mar 2012'''<br />
* Speaker: David Sivak<br />
* Affiliation: UCB<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''8 Mar 2012'''<br />
* Speaker: Ivan Schwab<br />
* Affiliation: UC Davis<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Evolution's Witness: How Eyes Evolved<br />
* Abstract:<br />
<br />
'''14 Mar 2012'''<br />
* Speaker: David Sussillo<br />
* Affiliation:<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''18 April 2012'''<br />
* Speaker: Kristofer Bouchard<br />
* Affiliation: UCSF<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Cortical Foundations of Human Speech Production<br />
* Abstract:<br />
<br />
'''23 May 2012''' (rescheduled from April 11)<br />
* Speaker: Logan Grosenick<br />
* Affiliation: Stanford, Deisseroth & Suppes Labs<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: Acquisition, creation, & analysis of 4D light fields with applications to calcium imaging & optogenetics<br />
* Abstract: In Light Field Microscopy (LFM), images can be computationally refocused after they are captured [1]. This permits acquiring focal stacks and reconstructing volumes from a single camera frame. In Light Field Illumination (LFI), the same ideas can be used to create an illumination system that can deliver focused light to any position in a volume without moving optics, and these two devices (LFM/LFI) can be used together in the same system [2]. So far, these imaging and illumination systems have largely been used independently in proof-of-concept experiments [1,2]. In this talk I will discuss applications of a combined scanless volumetric imaging and volumetric illumination system applied to 4D calcium imaging and photostimulation of neurons in vivo and in vitro. The volumes resulting from these methods are large (>500,000 voxels per time point), collected at 10-100 frames per second, and highly correlated in space and time. Analyzing such data has required the development and application of machine learning methods appropriate to large, sparse, nonnegative data, as well as the estimation of neural graphical models from calcium transients. This talk will cover the reconstruction and creation of volumes in a microscope using Light Fields [1,2], and the current state-of-the-art for analyzing these large volumes in the context of calcium imaging and optogenetics. <br />
<br />
[1] M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz. Light Field Microscopy. ACM Transactions on Graphics 25(3), Proceedings of SIGGRAPH 2006.<br />
[2] M. Levoy, Z. Zhang, and I. McDowall. Recording and controlling the 4D light field in a microscope. Journal of Microscopy, Volume 235, Part 2, 2009, pp. 144-162. Cover article.<br />
<br />
BIO: Logan received bachelor's degrees with honors in Biology and Psychology, and a master's degree in Statistics, from Stanford. He is a Ph.D. candidate in the Neurosciences Program working in the labs of Karl Deisseroth and Patrick Suppes, and a trainee at the Stanford Center for Mind, Brain, and Computation. He is interested in developing and applying novel computational imaging and machine learning techniques in order to observe, control, and understand neuronal circuit dynamics.<br />
<br />
'''7 June 2012''' (Thursday)<br />
* Speaker: Mitya Chklovskii<br />
* Affiliation: Janelia<br />
* Host: Bruno<br />
* Status:<br />
* Title:<br />
* Abstract:<br />
<br />
'''27 June 2012''' <br />
* Speaker: Jerry Feldman<br />
* Affiliation:<br />
* Host: Bruno<br />
* Status:<br />
* Title:<br />
* Abstract:<br />
<br />
'''30 July 2012''' <br />
* Speaker: Lucas Theis<br />
* Affiliation: Matthias Bethge lab, Werner Reichardt Centre for Integrative Neuroscience, Tübingen<br />
* Host: Jascha<br />
* Status: Confirmed<br />
* Title: Hierarchical models of natural images<br />
* Abstract: Probabilistic models of natural images have been used to solve a variety of computer vision tasks, and as a means to better understand the computations performed by the visual system in the brain. Many theoretical considerations and biological observations suggest that natural image models should be hierarchically organized, yet to date the best-known models are still based on what are better described as shallow representations. In this talk, I will present two image models. One is based on the idea of Gaussianization for greedily constructing hierarchical generative models. I will show that when combined with independent subspace analysis, it is able to compete with the state of the art for modeling image patches. The other model combines mixtures of Gaussian scale mixtures with a directed graphical model and multiscale image representations, and is able to generate highly structured images of arbitrary size. Evaluating the model's likelihood and comparing it to a large number of other image models shows that it might well be the best model of natural images yet.<br />
<br />
(joint work with Reshad Hosseini and Matthias Bethge)<br />
<br />
=== 2010/11 academic year ===<br />
<br />
'''02 Sep 2010'''<br />
* Speaker: Johannes Burge<br />
* Affiliation: University of Texas at Austin<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''8 Sep 2010'''<br />
* Speaker: Tobi Szuts<br />
* Affiliation: Meister Lab/ Harvard U.<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Wireless recording of neural activity in the visual cortex of a freely moving rat.<br />
* Abstract: Conventional neural recording systems restrict behavioral experiments to a flat indoor environment compatible with the cable that tethers the subject to the recording instruments. To overcome these constraints, we developed a wireless multi-channel system for recording neural signals from a freely moving animal the size of a rat or larger. The device takes up to 64 voltage signals from implanted electrodes, samples each at 20 kHz, time-division multiplexes them onto a single output line, and transmits that output by radio frequency to a receiver and recording computer more than 60 m away. The system introduces less than 4 µV RMS of electrode-referred noise, comparable to wired recording systems and considerably less than biological noise. The system has a greater channel count and transmission distance than existing telemetry systems. The wireless system has been used to record from the visual cortex of a rat during unconstrained conditions. Outdoor recordings show V1 activity is modulated by nest-building activity. During unguided behavior indoors, neurons responded rapidly and consistently to changes in light level, suppressive effects were prominent in response to an illuminant transition, and firing rate was strongly modulated by locomotion. Neural firing in the visual cortex is relatively sparse, and moderate correlations are observed over large distances, suggesting that synchrony is driven by global processes.<br />
<br />
'''29 Sep 2010'''<br />
* Speaker: Vikash Gilja<br />
* Affiliation: Stanford University<br />
* Host: Charles<br />
* Status: Confirmed<br />
* Title: Towards Clinically Viable Neural Prosthetic Systems.<br />
* Abstract:<br />
<br />
'''20 Oct 2010'''<br />
* Speaker: Alexandre Francois<br />
* Affiliation: USC<br />
* Host: <br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''3 Nov 2010'''<br />
* Speaker: Eric Jonas and Vikash Mansinghka<br />
* Affiliation: Navia Systems<br />
* Host: Jascha<br />
* Status: Confirmed<br />
* Title: Natively Probabilistic Computation: Principles, Artifacts, Architectures and Applications<br />
* Abstract: Complex probabilistic models and Bayesian inference are becoming increasingly critical across science and industry, especially in large-scale data analysis. They are also central to our best computational accounts of human cognition, perception and action. However, all these efforts struggle with the infamous curse of dimensionality. Rich probabilistic models can seem hard to write and even harder to solve, as specifying and calculating probabilities often appears to require the manipulation of exponentially (and sometimes infinitely) large tables of numbers.<br />
<br />
We argue that these difficulties reflect a basic mismatch between the needs of probabilistic reasoning and the deterministic, functional orientation of our current hardware, programming languages and CS theory. To mitigate these issues, we have been developing a stack of abstractions for natively probabilistic computation, based around stochastic simulators (or samplers) for distributions, rather than evaluators for deterministic functions. Ultimately, our aim is to produce a model of computation and the associated hardware and programming tools that are as suited for uncertain inference and decision-making as our current computers are for precise arithmetic.<br />
<br />
In this talk, we will give an overview of the entire stack of abstractions supporting natively probabilistic computation, with technical detail on several hardware and software artifacts we have implemented so far. We will also touch on some new theoretical results regarding the computational complexity of probabilistic programs. Throughout, we will motivate and connect this work to some current applications in biomedical data analysis and computer vision, as well as potential hypotheses regarding the implementation of probabilistic computation in the brain.<br />
<br />
This talk includes joint work with Keith Bonawitz, Beau Cronin, Cameron Freer, Daniel Roy and Joshua Tenenbaum.<br />
<br />
BRIEF BIOGRAPHY<br />
<br />
Vikash Mansinghka is a co-founder and the CTO of Navia Systems, a venture-funded startup company building natively probabilistic computing machines. He spent 10 years at MIT, eventually earning an SB in Mathematics, an SB in Computer Science, an MEng in Computer Science, and a PhD in Computation. He held graduate fellowships from the NSF and MIT Lincoln Laboratory, and his PhD dissertation won the 2009 MIT George M. Sprowls award for best dissertation in computer science. He currently serves on DARPA's Information Science and Technology (ISAT) Study Group.<br />
<br />
Eric Jonas is a co-founder of Navia Systems, responsible for in-house accelerated inference research and development. He spent ten years at MIT, where he earned SB degrees in electrical engineering and computer science and in neurobiology, an MEng in EECS, and expects a neurobiology PhD soon. He’s passionate about biological applications of probabilistic reasoning and hopes to use Navia’s capabilities to combine data from biological science, clinical histories, and patient outcomes into seamless models.<br />
<br />
'''8 Nov 2010'''<br />
* Speaker: Patrick Ruther<br />
* Affiliation: Imtek, University of Freiburg<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''10 Nov 2010'''<br />
* Speaker: Aurel Lazar<br />
* Affiliation: Department of Electrical Engineering, Columbia University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Encoding Visual Stimuli with a Population of Hodgkin-Huxley Neurons<br />
* Abstract: We first present a general framework for the reconstruction of natural video scenes encoded with a population of spiking neural circuits with random thresholds. The visual encoding system consists of a bank of filters, modeling the visual receptive fields, in cascade with a population of neural circuits, modeling encoding with spikes in the early visual system. The neuron models considered include integrate-and-fire neurons and ON-OFF neuron pairs with threshold-and-fire spiking mechanisms. All thresholds are assumed to be random. We show that for both time-varying and space-time-varying stimuli, neural spike encoding is akin to taking noisy measurements on the stimulus.<br />
Second, we formulate the reconstruction problem as the minimization of a suitable cost functional in a finite-dimensional vector space and provide an explicit algorithm for stimulus recovery. We also present a general solution using the theory of smoothing splines in Reproducing Kernel Hilbert Spaces. We provide examples for both synthetic video and natural scenes, and show that the quality of the reconstruction degrades gracefully as the threshold variability of the neurons increases.<br />
Third, we demonstrate a number of simple operations on the original visual stimulus, including translations, rotations and zooming. All these operations are natively executed in the spike domain. The processed spike trains are decoded for the faithful recovery of the stimulus and its transformations.<br />
Finally, we extend the above results to neural encoding circuits built with Hodgkin-Huxley neurons.<br />
References:<br />
Aurel A. Lazar, Eftychios A. Pnevmatikakis and Yiyin Zhou, Encoding Natural Scenes with Neural Circuits with Random Thresholds, Vision Research, 2010, Special Issue on Mathematical Models of Visual Coding, http://dx.doi.org/10.1016/j.visres.2010.03.015<br />
Aurel A. Lazar, Population Encoding with Hodgkin-Huxley Neurons, IEEE Transactions on Information Theory, Volume 56, Number 2, pp. 821-837, February 2010, Special Issue on Molecular Biology and Neuroscience, http://dx.doi.org/10.1109/TIT.2009.2037040<br />
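The "noisy measurements" view of spike encoding can be illustrated with a toy integrate-and-fire encoder (a minimal sketch; the function name, default parameters, and simple reset-to-zero rule are illustrative, not the circuits of the papers above):

```python
def iaf_encode(u, dt, threshold=1.0, bias=0.5):
    """Toy integrate-and-fire encoder: the membrane variable integrates
    bias + stimulus; when it crosses the threshold, a spike time is
    recorded and the variable resets to zero.

    u is the stimulus sampled every dt seconds; returns spike times.
    """
    spikes, v = [], 0.0
    for k, uk in enumerate(u):
        v += (bias + uk) * dt
        if v >= threshold:
            spikes.append(k * dt)
            v = 0.0
    return spikes

# Each interspike interval satisfies: integral of (bias + u) dt = threshold,
# so the spike train amounts to a sequence of measurements on the stimulus.
```

For a zero stimulus, spikes arrive at the regular rate bias/threshold, and any modulation of the intervals reflects the stimulus itself.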
<br />
'''11 Nov 2010''' (UCB holiday)<br />
* Speaker: Martha Nari Havenith<br />
* Affiliation: UCL<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: Finding spike timing in the visual cortex - Oscillations as the internal clock of vision?<br />
* Abstract:<br />
<br />
'''19 Nov 2010''' (note: on Friday because of SFN)<br />
* Speaker: Dan Butts<br />
* Affiliation: UMD<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: Common roles of inhibition in visual and auditory processing.<br />
* Abstract: The role of inhibition in sensory processing is often obscured in extracellular recordings, because the absence of a neuronal response associated with inhibition might also be explained by a simple lack of excitation. However, increasingly, evidence from intracellular recordings demonstrates important roles of inhibition in shaping the stimulus selectivity of sensory neurons in both the visual and auditory systems. We have developed a nonlinear modeling approach that can identify putative excitatory and inhibitory inputs to a neuron using standard extracellular recordings, and have applied these techniques to understand the role of inhibition in shaping sensory processing in visual and auditory areas. In pre-cortical visual areas (retina and LGN), we find that inhibition likely plays a role in generating temporally precise responses, and mediates adaptation to changing contrast. In an auditory pre-cortical area (inferior colliculus) identified inhibition has nearly identical appearance and functions in temporal processing and adaptation. Thus, we predict common roles of inhibition in these sensory areas, and more generally demonstrate general methods for characterizing the nonlinear computations that comprise sensory processing.<br />
<br />
'''24 Nov 2010'''<br />
* Speaker: Eizaburo Doi<br />
* Affiliation: NYU<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
<br />
'''29 Nov 2010 - informal talk'''<br />
* Speaker: Eero Lehtonen<br />
* Affiliation: UTU Finland<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Memristors<br />
* Abstract:<br />
<br />
'''1 Dec 2010'''<br />
* Speaker: Gadi Geiger<br />
* Affiliation: MIT<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: Visual and Auditory Perceptual Modes that Characterize Dyslexics<br />
* Abstract: I will describe how dyslexics’ visual and auditory perception is wider and more diffuse than that of typical readers. This suggests wider neural tuning in dyslexics. In addition, I will describe how this processing relates to difficulties in reading. To strengthen this argument, and more importantly to help dyslexics, I will describe a regimen of practice that improves reading in dyslexics while narrowing perception.<br />
<br />
<br />
'''13 Dec 2010'''<br />
* Speaker: Jörg Lücke<br />
* Affiliation: FIAS<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Linear and Non-linear Approaches to Component Extraction and Their Applications to Visual Data<br />
* Abstract: In the nervous system of humans and animals, sensory data are represented as combinations of elementary data components. While for data such as sound waveforms the elementary components combine linearly, other data can better be modeled by non-linear forms of component superpositions. I motivate and discuss two models with binary latent variables: one using standard linear superpositions of basis functions and one using non-linear superpositions. Crucial for the applicability of both models are efficient learning procedures. I briefly introduce a novel training scheme (ET) and show how it can be applied to probabilistic generative models. For linear and non-linear models the scheme efficiently infers the basis functions as well as the level of sparseness and data noise. In large-scale applications to image patches, we show results on the statistics of inferred model parameters. Differences between the linear and non-linear models are discussed, and both models are compared to results of standard approaches in the literature and to experimental findings. Finally, I briefly discuss learning in a recent model that takes explicit component occlusions into account.<br />
<br />
'''15 Dec 2010'''<br />
* Speaker: Claudia Clopath<br />
* Affiliation: Université Paris Descartes<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
<br />
'''18 Jan 2011'''<br />
* Speaker: Siwei Lyu<br />
* Affiliation: Computer Science Department, University at Albany, SUNY<br />
* Host: Bruno<br />
* Status: confirmed <br />
* Title: Divisive Normalization as an Efficient Coding Transform: Justification and Evaluation<br />
* Abstract:<br />
<br />
'''19 Jan 2011'''<br />
* Speaker: David Field (informal talk)<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: Tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''25 Jan 2011'''<br />
* Speaker: Ruth Rosenholtz<br />
* Affiliation: Dept. of Brain & Cognitive Sciences, Computer Science and AI Lab, MIT<br />
* Host: Bruno<br />
* Status: Confirmed <br />
* Title: What your visual system sees where you are not looking<br />
* Abstract:<br />
<br />
'''26 Jan 2011'''<br />
* Speaker: Ernst Niebur<br />
* Affiliation: Johns Hopkins U<br />
* Host: Fritz<br />
* Status: Confirmed <br />
* Title: <br />
* Abstract:<br />
<br />
'''16 March 2011'''<br />
* Speaker: Vladimir Itskov<br />
* Affiliation: University of Nebraska-Lincoln<br />
* Host: Chris<br />
* Status: Confirmed <br />
* Title: <br />
* Abstract:<br />
<br />
'''23 March 2011'''<br />
* Speaker: Bruce Cumming<br />
* Affiliation: National Institutes of Health<br />
* Host: Ivana<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract:<br />
<br />
'''27 April 2011'''<br />
* Speaker: Lubomir Bourdev<br />
* Affiliation: Computer Science, UC Berkeley<br />
* Host:Bruno<br />
* Status: Confirmed<br />
* Title: "Poselets and Their Applications in High-Level Computer Vision Problems"<br />
* Abstract:<br />
<br />
'''12 May 2011 (note: Thursday)'''<br />
* Speaker: Jack Culpepper<br />
* Affiliation: Redwood Center/EECS<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''26 May 2011'''<br />
* Speaker: Ian Stevenson<br />
* Affiliation: Northwestern University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Explaining tuning curves by estimating interactions between neurons<br />
* Abstract: One of the central tenets of systems neuroscience is that tuning curves are a byproduct of the interactions between neurons. Using multi-electrode recordings and recently developed inference techniques, we can begin to examine this idea in detail and study how well we can explain the functional properties of neurons using the activity of other simultaneously recorded neurons. Here we examine datasets from 6 different brain areas recorded during typical sensorimotor tasks, each with ~100 simultaneously recorded neurons. Using these datasets we measured the extent to which interactions between neurons can explain the tuning properties of individual neurons. We found that, in almost all areas, modeling interactions between 30-50 neurons allows more accurate spike prediction than tuning curves. This suggests that tuning can, in some sense, be explained by interactions between neurons in a variety of brain areas, even when recordings consist of relatively small numbers of neurons.<br />
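As a toy sketch of the kind of coupling analysis this abstract describes (not the speakers' actual method or data; the population size, rates, and learning rate below are arbitrary), one can fit a logistic model that predicts a target neuron's binned spikes from the simultaneous activity of other neurons, on synthetic data where the true couplings are known:

```python
import numpy as np

# Synthetic population: n "other" neurons, binary spike counts per time bin.
rng = np.random.default_rng(1)
T, n = 5000, 20
pop = (rng.random((T, n)) < 0.1).astype(float)

# Target neuron driven by a weighted sum of the others (ground-truth couplings).
w_true = rng.normal(0.0, 1.0, n)
p_true = 1 / (1 + np.exp(-(pop @ w_true - 1.0)))
y = (rng.random(T) < p_true).astype(float)

# Fit coupling weights by gradient ascent on the Bernoulli log-likelihood.
w = np.zeros(n)
b = 0.0
for _ in range(1000):
    p = 1 / (1 + np.exp(-(pop @ w + b)))
    w += pop.T @ (y - p) / T
    b += np.mean(y - p)

# Correlation between fitted and ground-truth couplings (should be close to 1
# on this synthetic data).
print(np.corrcoef(w, w_true)[0, 1])
```

With real recordings the comparison in the abstract is between such a coupled model and a stimulus-only tuning-curve model, evaluated on held-out spike prediction.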
<br />
'''1 June 2011'''<br />
* Speaker: Michael Oliver<br />
* Affiliation: Gallant lab<br />
* Host: Bruno<br />
* Status: Tentative <br />
* Title: <br />
* Abstract:<br />
<br />
'''8 June 2011'''<br />
* Speaker: Alyson Fletcher<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: Generalized Approximate Message Passing for Neural Receptive Field Estimation and Connectivity<br />
* Abstract: Fundamental to understanding sensory encoding and connectivity of neurons are effective tools for developing and validating complex mathematical models from experimental data. In this talk, I present a graphical models approach to the problems of neural connectivity reconstruction under multi-neuron excitation and to receptive field estimation of sensory neurons in response to stimuli. I describe a new class of Generalized Approximate Message Passing (GAMP) algorithms for a general class of inference problems on graphical models based on Gaussian approximations of loopy belief propagation. The GAMP framework is extremely general, provides a systematic procedure for incorporating a rich class of nonlinearities, and is computationally tractable with large amounts of data. In addition, for both the connectivity reconstruction and parameter estimation problems, I show that GAMP-based estimation can naturally incorporate sparsity constraints in the model that arise from the fact that only a small fraction of the potential inputs have any influence on the output of a particular neuron. A simulation of reconstruction of cortical neural mapping under multi-neuron excitation shows that GAMP offers improvement over previous compressed sensing methods. The GAMP method is also validated on estimation of linear nonlinear Poisson (LNP) cascade models for neural responses of salamander retinal ganglion cells.<br />
<br />
=== 2009/10 academic year ===<br />
<br />
'''2 September 2009''' <br />
* Speaker: Keith Godfrey<br />
* Affiliation: University of Cambridge<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''7 October 2009'''<br />
* Speaker: Anita Schmid<br />
* Affiliation: Cornell University<br />
* Host: Kilian<br />
* Status: Confirmed<br />
* Title: Subpopulations of neurons in visual area V2 perform differentiation and integration operations in space and time<br />
* Abstract: The interconnected areas of the visual system work together to find object boundaries in visual scenes. Primary visual cortex (V1) mainly extracts oriented luminance boundaries, while secondary visual cortex (V2) also detects boundaries defined by differences in texture. How the outputs of V1 neurons are combined to allow for the extraction of these more complex boundaries in V2 is as yet unclear. To address this question, we probed the processing of orientation signals in single neurons in V1 and V2, focusing on response dynamics of neurons to patches of oriented gratings and to combinations of gratings in neighboring patches and sequential time frames. We found two kinds of response dynamics in V2, both of which are different from those of V1 neurons. While V1 neurons in general prefer one orientation, one subpopulation of V2 neurons (“transient”) shows a temporally dynamic preference, resulting in a preference for changes in orientation. The second subpopulation of V2 neurons (“sustained”) responds similarly to V1 neurons, but with a delay. The dynamics of nonlinear responses to combinations of gratings reinforce these distinctions: the dynamics enhance the preference of V1 neurons for continuous orientations, and enhance the preference of V2 transient neurons for discontinuous ones. We propose that transient neurons in V2 perform a differentiation operation on the V1 input, both spatially and temporally, while the sustained neurons perform an integration operation. We show that a simple feedforward network with delayed inhibition can account for the temporal but not for the spatial differentiation operation.<br />
<br />
'''28 October 2009'''<br />
* Speaker: Andrea Benucci<br />
* Affiliation: Institute of Ophthalmology, University College London<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Stimulus dependence of the functional connectivity between neurons in primary visual cortex<br />
* Abstract: It is known that visual stimuli are encoded by the concerted activity of large populations of neurons in visual cortical areas. However, it is only recently that recording techniques have been made available to study such activations from large ensembles of neurons simultaneously, with millisecond temporal precision and tens of microns spatial resolution. I will present data from voltage-sensitive dye (VSD) imaging and multi-electrode recordings (“Utah” probes) from the primary visual cortex of the cat (V1). I will discuss the relationship between two fundamental cortical maps of the visual system: the map of retinotopy and the map of orientation. Using spatially localized and full-field oriented stimuli, we studied the functional interdependency of these maps. I will describe traveling and standing waves of cortical activity and their key role as a dynamical substrate for the spatio-temporal coding of visual information. I will further discuss the properties of the spatio-temporal code in the context of continuous visual stimulation. While recording population responses to a sequence of oriented stimuli, we asked how responses to individual stimuli summate over time. We found that such rules are mostly linear, supporting the idea that spatial and temporal codes in area V1 operate largely independently. However, these linear rules of summation fail when the visual drive is removed, suggesting that the visual cortex can readily switch between a dynamical regime where either feed-forward or intra-cortical inputs determine the response properties of the network.<br />
<br />
'''12 November 2009 (Thursday)'''<br />
* Speaker: Song-Chun Zhu<br />
* Affiliation: UCLA<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''18 November 2009'''<br />
* Speaker: Dan Graham<br />
* Affiliation: Dept. of Mathematics, Dartmouth College<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: The Packet-Switching Brain: A Hypothesis<br />
* Abstract: Despite great advances in our understanding of neural responses to natural stimuli, the basic structure of the neural code remains elusive. In this talk, I will describe a novel hypothesis regarding the fundamental structure of neural coding in mammals. In particular, I propose that an internet-like routing architecture (specifically packet-switching) underlies neocortical processing, and I propose means of testing this hypothesis via neural response sparseness measurements. I will synthesize a host of suggestive evidence that supports this notion and will, more generally, argue in favor of a large-scale shift from the now-dominant “computer metaphor” to the “internet metaphor.” This shift is intended to spur new thinking with regard to neural coding, and its main contribution is to privilege communication over computation as the prime goal of neural systems.<br />
<br />
'''16 December 2009'''<br />
* Speaker: Pietro Berkes<br />
* Affiliation: Volen Center for Complex Systems, Brandeis University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Generative models of vision: from sparse coding toward structured models<br />
* Abstract: From a computational perspective, one can think of visual perception as the problem of analyzing the light patterns detected by the retina to recover their external causes. This process requires combining the incoming sensory evidence with internal prior knowledge about general properties of visual elements and the way they interact, and can be formalized in a class of models known as causal generative models. In the first part of the talk, I will discuss the first and most established generative model, namely the sparse coding model. Sparse coding has been largely successful in showing how the main characteristics of simple cells' receptive fields can be accounted for based uniquely on the statistics of natural images. I will briefly review the evidence supporting this model, and contrast it with recent data from the primary visual cortex of ferrets and rats showing that the sparseness of neural activity over development and anesthesia seems to follow trends opposite to those predicted by sparse coding. In the second part, I will argue that the generative point of view calls for models of natural images that take into account more of the structure of the visual environment. I will present a model that takes a first step in this direction by incorporating the fundamental distinction between identity and attributes of visual elements. After learning, the model mirrors several aspects of the organization of V1, and results in a novel interpretation of complex and simple cells as parallel populations of cells, coding for different aspects of the visual input. Further steps toward more structured generative models might thus lead to the development of a more comprehensive account of visual processing in the visual cortex.<br />
<br />
'''6 January 2010'''<br />
* Speaker: Susanne Still<br />
* Affiliation: U of Hawaii<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''20 January 2010'''<br />
* Speaker: Tom Dean<br />
* Affiliation: Google<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Accelerating Computer Vision and Machine Learning Algorithms with Graphics Processors<br />
* Abstract: Graphics processors (GPUs) and massively-multi-core architectures are becoming more powerful, less costly and more energy efficient, and the related programming language issues are beginning to sort themselves out. That said, most researchers don’t want to be writing code that depends on any particular architecture or parallel programming model. Linear algebra, Fourier analysis and image processing have standard libraries that are being ported to exploit SIMD parallelism in GPUs. We can depend on the massively-multi-core machines du jour to support these libraries and on the high-performance-computing (HPC) community to do the porting for us or with us. These libraries can significantly accelerate important applications in image processing, data analysis and information retrieval. We can develop APIs and the necessary run-time support so that code relying on these libraries will run on any machine in a cluster of computers but exploit GPUs whenever available. This strategy allows us to move toward hybrid computing models that enable a wider range of opportunities for parallelism without requiring the special training of programmers or the disadvantages of developing code that depends on specialized hardware or programming models. This talk summarizes the state of the art in massively-multi-core architectures, presents experimental results that demonstrate the potential for significant performance gains in the two general areas of image processing and machine learning, provides examples of the proposed programming interface, and presents some more detailed experimental results on one particular problem involving video-content analysis.<br />
<br />
'''27 January 2010'''<br />
* Speaker: David Philiponna<br />
* Affiliation: Paris<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''24 February 2010'''<br />
* Speaker: Gordon Pipa<br />
* Affiliation: U Osnabrueck/MPI Frankfurt<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''3 March 2010'''<br />
* Speaker: Gaute Einevoll<br />
* Affiliation: UMB, Norway<br />
* Host: Amir<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
<br />
'''4 March 2010'''<br />
* Speaker: Harvey Swadlow<br />
* Affiliation: <br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''8 April 2010'''<br />
* Speaker: Alan Yuille <br />
* Affiliation: UCLA<br />
* Host: Amir<br />
* Status: Confirmed (for 1pm)<br />
* Title: <br />
* Abstract:<br />
<br />
'''28 April 2010'''<br />
* Speaker: Dharmendra Modha - cancelled<br />
* Affiliation: IBM<br />
* Host:Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''5 May 2010'''<br />
* Speaker: David Zipser<br />
* Affiliation: UCB<br />
* Host: Daniel Little<br />
* Status: Tentative<br />
* Title: Brytes 2:<br />
* Abstract:<br />
<br />
Brytes are little brains that can be assembled into larger, smarter brains. In my first talk I presented a biologically plausible, computationally tractable model of brytes and described how they can be used as subunits to build brains with interesting behaviors.<br />
<br />
In this talk I will first show how large numbers of brytes can cooperate to perform complicated actions such as arm and hand manipulations in the presence of obstacles. Then I will describe a strategy for a higher level of control that informs each bryte what role it should play in accomplishing the current task. These results could have considerable significance for understanding the brain and possibly be applicable to robotics and BMI.<br />
<br />
'''12 May 2010'''<br />
* Speaker: Frank Werblin (Redwood group meeting - internal only)<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: Tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''19 May 2010'''<br />
* Speaker: Anna Judith<br />
* Affiliation: UCB<br />
* Host: Daniel Little (Redwood Lab Meeting - internal only)<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:</div>Giselyhttps://rctn.org/w/index.php?title=Seminars&diff=9143Seminars2018-08-17T19:48:38Z<p>Gisely: /* Tentative / Confirmed Speakers */</p>
<hr />
<div>== Instructions ==<br />
<br />
# Check the internal calendar (here) for a free seminar slot. Seminars are usually Wednesdays at noon, but it is flexible in case there is a day that works better for the speaker. However, it is usually best to avoid booking multiple speakers in the same week - it leads to "seminar burnout" and reduced attendance. But use your own judgement here - if it's a good opportunity and that's the only time that works then go ahead with it.<br />
# Once you have proposed a date to a speaker, fill in the speaker information under the appropriate date (or change if necessary). Use the status field to indicate whether the date is tentative or confirmed. Please also include your name as ''host'' in case somebody wants to contact you.<br />
# Once the invitation is confirmed with the speaker, change the status field to 'confirmed'. Also notify the webmaster (Bruno) [mailto:baolshausen@berkeley.edu] that we have a confirmed speaker so that he/she can update the public web page. Please include a title and abstract.<br />
# Natalie (HWNI) checks our web page regularly and will send out an announcement a week before and also include with the weekly neuro announcements, but if you don't get it confirmed until the last minute then make sure to email Natalie [mailto:nrterranova@berkeley.edu] as well to give her a heads up so she knows to send out an announcement in time.<br />
# If the speaker needs accommodations you should contact Natalie [mailto:nrterranova@berkeley.edu] to reserve a room at the faculty club. Tell her it's for a Redwood speaker so she knows how to bill it.<br />
# During the visit you will need to look after the visitor, schedule visits with other labs, make plans for lunch, dinner, etc., and introduce the speaker at the seminar (don't ask Bruno to do this at the last moment). Save receipts for any meals you paid for.<br />
# After the seminar and before the speaker leaves, make sure to give them Natalie's contact info and have them email her their receipts, explaining it's for reimbursement for a Redwood seminar. Natalie will then process the reimbursement. She can also help you with getting reimbursed for any expenses you incurred for meals and entertainment.<br />
<br />
== Tentative / Confirmed Speakers ==<br />
<br />
'''January 31, 2018'''<br />
* Speaker: Joel Makin<br />
* Time: 12:00<br />
* Affiliation: UCSF<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''February 6, 2018'''<br />
* Speaker: Leenoy Meshulam<br />
* Time: 12:00<br />
* Affiliation: Princeton University<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: The 1000+ neurons challenge: emergent simplicity in (very) large populations<br />
* Abstract: Recent technological progress has dramatically increased our access to the neural activity underlying memory-related tasks. These complex high-dimensional data call for theories that allow us to identify signatures of collective activity in the networks that are crucial for the emergence of cognitive functions. As an example, we study the neural activity in dorsal hippocampus as a mouse runs along a virtual linear track. One of the dominant features of this data is the activity of place cells, which fire when the animal visits particular locations. During the first stage of our work we used a maximum entropy framework to characterize the probability distribution of the joint activity patterns observed across ensembles of up to 100 cells. These models, which are equivalent to Ising models with competing interactions, make surprisingly accurate predictions for the activity of individual neurons given the state of the rest of the network, and this is true both for place cells and for non-place cells. For the second stage of our work we study networks of ~1500 neurons. To address this much larger system, we use different coarse graining methods, in the spirit of the renormalization group, to uncover macroscopic features of the network. We see hints of scaling and of behavior that is controlled by a non-trivial fixed point. Perhaps, then, there is emergent simplicity even in these very complex systems of real neurons in the brain.<br />
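As a minimal illustration of the model class this abstract describes (not code or data from the talk; the population size, firing rate, and optimization settings below are arbitrary), here is a pairwise maximum-entropy ("Ising") fit for a toy binary population, small enough to enumerate every state exactly:

```python
import itertools
import numpy as np

# Toy pairwise maximum-entropy model: P(x) ∝ exp(h·x + Σ_{i<j} J_ij x_i x_j),
# fit by matching the data's mean rates and pairwise moments.
rng = np.random.default_rng(0)
n = 5                                                # small enough to enumerate 2^n states
data = (rng.random((2000, n)) < 0.2).astype(float)   # fake binarized spike words

states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)

def model_stats(h, J):
    """Exact means and pairwise moments under the current model."""
    E = states @ h + np.einsum('si,ij,sj->s', states, np.triu(J, 1), states)
    p = np.exp(E - E.max())
    p /= p.sum()
    mean = p @ states                                # <x_i>
    corr = states.T @ (states * p[:, None])          # <x_i x_j>
    return mean, corr

data_mean = data.mean(0)
data_corr = data.T @ data / len(data)

h = np.zeros(n)
J = np.zeros((n, n))
for _ in range(4000):                                # gradient ascent on log-likelihood
    m, c = model_stats(h, J)
    h += 0.2 * (data_mean - m)
    J += 0.2 * np.triu(data_corr - c, 1)

m, _ = model_stats(h, J)
print(np.max(np.abs(m - data_mean)))                 # should be near 0 after fitting
```

For real data the moments come from recorded spike words rather than a random generator, and for hundreds of neurons the exact enumeration must be replaced by Monte Carlo estimates of the model statistics.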
<br />
<br />
'''!!! NOTE: going forward for spring term 2018, please avoid Wednesday for scheduling Redwood seminars as we have the Simons brain and computation seminars that morning, so it makes for a packed day to have both !!!'''<br />
<br />
<br />
'''February 21, 2018'''<br />
* Speaker: Tianshi Wang<br />
* Time: 12:00<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: <br />
* Abstract: <br />
<br />
'''April 2, 2018'''<br />
* Speaker: Pascal Fries<br />
* Time: 12:00<br />
* Affiliation: Berkeley<br />
* Host: Bruno/Dana Ballard<br />
* Status: tentative<br />
* Title: <br />
* Abstract: <br />
<br />
'''September 12, 2018'''<br />
* Speaker: Wujie Zhang<br />
* Time: 12:00<br />
* Affiliation: Yartsev lab, Berkeley<br />
* Host: Guy<br />
* Status: Confirmed<br />
* Title:<br />
* Abstract:<br />
<br />
'''September 17, 2018'''<br />
* Speaker: Juergen Jost<br />
* Time: 12:00<br />
* Affiliation: MPI for Mathematics in the Sciences, Leipzig<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''TBD, sometime in the Fall'''<br />
* Speaker: Evangelos Theodorou<br />
* Time: TBD<br />
* Affiliation: GeorgiaTech<br />
* Host: Mike/Dibyendu Mandal<br />
* Status: planning<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
<br />
'''TBD, 2016'''<br />
* Speaker: Alexander Stubbs<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Michael Levy<br />
* Status: tentative<br />
* Title: Could chromatic aberration allow for an alternative evolutionary pathway towards color vision?<br />
* Abstract: We present a mechanism by which organisms with only a single photoreceptor, which have a monochromatic view of the world, can achieve color discrimination. An off-axis pupil and the principle of chromatic aberration (where different wavelengths come to focus at different distances behind a lens) can combine to provide “color-blind” animals with a way to distinguish colors. As a specific example, we constructed a computer model of the visual system of cephalopods (octopus, squid, and cuttlefish) that have a single unfiltered photoreceptor type. We compute a quantitative image quality budget for this visual system and show how chromatic blurring dominates the visual acuity in these animals in shallow water. This proposed mechanism is consistent with the extensive suite of visual/behavioral and physiological data that has been obtained from cephalopod studies and offers a possible solution to the apparent paradox of vivid chromatic behaviors in color-blind animals. Moreover, this proposed mechanism has potential applicability in organisms with limited photoreceptor complements, such as spiders and dolphins.<br />
<br />
== Previous Seminars ==<br />
<br />
=== 2017/18 academic year ===<br />
<br />
'''July 10, 2017'''<br />
* Speaker: David Field<br />
* Time: 6:00pm<br />
* Affiliation: Cornell<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''July 18, 2017'''<br />
* Speaker: Jordi Puigbò<br />
* Time: 12:30<br />
* Affiliation: Synthetic, Perceptive, Emotive and Cognitive Systems (SPECS) lab, Dept. of Information and Telecommunication Technologies, Universitat Pompeu Fabra (Barcelona - Spain)<br />
* Host: Vasha<br />
* Status: Confirmed<br />
* Title: State Dependent Modulation of Perception Based on a Computational Model of Conditioning<br />
* Abstract: The embodied mammalian brain evolved to adapt to an only partially known and knowable world. The adaptive labeling of the world is critically dependent on the neocortex, which in turn is modulated by a range of subcortical systems such as the thalamus, ventral striatum, and the amygdala. A particular case in point is the learning paradigm of classical conditioning, where acquired representations of states of the world such as sounds and visual features are associated with predefined discrete behavioral responses such as eye blinks and freezing. Learning progresses in a very specific order, where the animal first identifies the features of the task that are predictive of a motivational state and then forms the association of the current sensory state with a particular action and shapes this action to the specific contingency. This adaptive feature selection has both attentional and memory components, i.e. a behaviorally relevant state must be detected while its representation must be stabilized to allow its interfacing to output systems. Here we present a computational model of the neocortical systems that underlie this feature detection process and its state-dependent modulation mediated by the amygdala and its downstream target, the nucleus basalis of Meynert. Specifically, we analyze how amygdala-driven cholinergic modulation switches between two perceptual modes, one for exploitation of learned representations and prototypes and another for the exploration of the new representations that provoked the change in the motivational state, presenting a framework for rapid learning of behaviorally relevant perceptual representations. Beyond reward-driven learning that is mostly based on exploitation, this paper presents a complementary mechanism for quick exploratory perception and learning grounded in the understanding of fear and surprise.<br />
<br />
'''Aug. 14, 2017'''<br />
* Speaker: Brent Doiron<br />
* Time: 12:00<br />
* Affiliation:<br />
* Host: Bruno/Hillel<br />
* Status: tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''Aug. 15, 2017'''<br />
* Speaker: Ken Miller<br />
* Time: 12:00<br />
* Affiliation: Columbia<br />
* Host: Bruno/Hillel<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Aug. 16, 2017'''<br />
* Speaker: Joshua Vogelstein<br />
* Time: 12:00<br />
* Affiliation: JHU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept. 6, 2017'''<br />
* Speaker: Gerald Friedland<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Jerry<br />
* Status: confirmed<br />
* Title: A Capacity Scaling Law for Artificial Neural Networks<br />
* Abstract:<br />
<br />
'''Sept. 20, 2017'''<br />
* Speaker: Carl Pabo<br />
* Time: 12:00<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Human Thought and the Human Future<br />
* Abstract:<br />
<br />
'''Oct. 11, 2017'''<br />
* Speaker: Deepak Pathak and Pulkit Agrawal<br />
* Time: 12:30 PM<br />
* Affiliation: UC Berkeley, BAIR<br />
* Host: Mayur Mudigonda<br />
* Status: Confirmed<br />
* Title: Curiosity and Rewards<br />
* Abstract:<br />
<br />
'''October 25th 2017'''<br />
* Speaker: Caleb Kemere<br />
* Time: 12:00<br />
* Affiliation: Rice<br />
* Host: Guy Isely<br />
* Status: Confirmed<br />
* Title: Unsupervised Inference of the Hippocampal Population Code from Offline Activity<br />
* Abstract: TBD (HMM-based hippocampal replay)<br />
<br />
'''Nov. 8, 2017'''<br />
* Speaker: John Harte<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Maximum Entropy and the Inference of Patterns in Nature <br />
* Abstract:<br />
<br />
'''Nov. 16, 2017'''<br />
* Speaker: Jeff Hawkins<br />
* Time: 12:00<br />
* Affiliation: Numenta<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''November 29th 2017'''<br />
* Speaker: Joel Kaardal<br />
* Time: 12:00<br />
* Affiliation: Salk<br />
* Host: Bruno/Frederic Theunissen<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''December 13, 2017'''<br />
* Speaker: Zhaoping Li<br />
* Time: 12:00<br />
* Affiliation: UCL<br />
* Host: Bruno/Frederic Theunissen<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''December 19, 2017'''<br />
* Speaker: Shaowei Lin<br />
* Time: 12:00<br />
* Affiliation:<br />
* Host: Chris Hillar<br />
* Status: confirmed<br />
* Title: Biologically plausible deep learning for recurrent spiking neural networks.<br />
* Abstract: Despite widespread success in deep learning, backpropagation has been criticized for its biological implausibility. To address this issue, Hinton and Bengio have suggested that our brains are performing approximations of backpropagation, and some of their proposed models seem promising. In the same vein, we propose a different model for learning in recurrent neural networks (RNNs), known as McCulloch-Pitts processes. As opposed to traditional models for RNNs (such as LSTMs) which are based on continuous-valued neurons operating in discrete time, our model consists of discrete-valued (spiking) neurons operating in continuous time. Through our model, we are able to derive extremely simple and local learning rules, which directly explain experimental results in Spike-Timing-Dependent Plasticity (STDP).<br />
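For readers unfamiliar with STDP, a minimal pair-based spike-timing-dependent plasticity window can be sketched as follows. This is the generic textbook exponential window, not the learning rule derived in the talk, and the amplitudes and time constant are illustrative values:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight change for a pre/post spike pair.

    dt = t_post - t_pre in ms. Pre-before-post (dt > 0) potentiates the
    synapse; post-before-pre (dt <= 0) depresses it, each with an
    exponentially decaying dependence on the timing difference.
    """
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

print(stdp_dw(5.0))    # pre fires 5 ms before post: positive (potentiation)
print(stdp_dw(-5.0))   # post fires 5 ms before pre: negative (depression)
```

Locality is the point the abstract emphasizes: each update depends only on the spike times of the two neurons the synapse connects, with no global error signal.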
<br />
'''Jan. 24, 2018'''<br />
* Speaker: Miguel Gredilla<br />
* Time: 12:00<br />
* Affiliation: Vicarious<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
=== 2016/17 academic year ===<br />
<br />
'''Sept. 7, 2016'''<br />
* Speaker: Dan Stowell<br />
* Time: 12:00<br />
* Affiliation: Queen Mary, University of London<br />
* Host: Frederic Theunissen<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept. 8, 2016'''<br />
* Speaker: Barb Finlay<br />
* Time: 12:00<br />
* Affiliation: Cornell Univ<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept. 27, 2016'''<br />
* Speaker: Yoshua Bengio<br />
* Time: 11:00<br />
* Affiliation: Univ Montreal<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Oct. 12, 2016'''<br />
* Speaker: Paul Rhodes<br />
* Time: 4:00<br />
* Affiliation: Specific Technologies<br />
* Host: Dylan/Bruno<br />
* Status: confirmed<br />
* Title: A novel and important problem in spatiotemporal pattern classification<br />
* Abstract: Specific Technologies uses a sensor response that consists of a vector time series, a spatiotemporal fingerprint, to classify bacteria at the strain level during their growth. The identification of resistant strains of bacteria has become one of the world's great problems (here is a link to a $20M prize that the US govt has issued: https://www.nih.gov/news-events/news-releases/federal-prize-competition-seeks-innovative-ideas-combat-antimicrobial-resistance). We are using deep convolutional nets to do this classification, but they are instantaneous, and so do not capture the temporal patterns that are often at the core of what differentiates strains. So using the full temporal character of the sensor response time series is a cutting edge neural ML problem, and important to society too.<br />
<br />
'''Oct. 25, 2016'''<br />
* Speaker: Douglas L. Jones<br />
* Time: 2:00<br />
* Affiliation: ECE Department, University of Illinois at Urbana-Champaign<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Optimal energy-efficient coding in sensory neurons<br />
* Abstract: Evolutionary pressure suggests that the spike-based code in the sensory nervous system should satisfy two opposing constraints: 1) minimize signal distortion in the encoding process (i.e., maintain fidelity) by keeping the average spike rate as high as possible, and 2) minimize the metabolic load on the neuron by keeping the average spike rate as low as possible. We hypothesize that selective pressure has shaped the biophysics of a neuron to satisfy these conflicting demands. An energy-fidelity trade-off can be obtained through a constrained optimization process that achieves the lowest signal distortion for a given constraint on the spike rate. We derive the asymptotically optimal average-energy-constrained neuronal source code and show that it leads to a dynamic threshold that functions as an internal decoder (reconstruction filter) and adapts a spike-firing threshold so that spikes are emitted only when the coding error reaches this threshold. A stochastic extension is obtained by adding internal noise (dithering, or stochastic resonance) to the spiking threshold. We show that the source-coding neuron model i) reproduces experimentally observed spike-times in response to a stimulus, and ii) reproduces the serial correlations in the observed sequence of inter-spike intervals, using data from a peripheral sensory neuron and a central (cortical) somatosensory neuron. Finally, we show that the spike-timing code, although a temporal code, is in the limit of high firing rates an instantaneous rate code and accurately predicts the peri-stimulus time histogram (PSTH). We conclude by suggesting possible biophysical (ionic) mechanisms for this coding scheme.<br />
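A minimal sketch of the flavor of encoder described above — an internal leaky reconstruction ("internal decoder") plus a spike-firing threshold on the coding error — assuming made-up parameters and a fixed quantum per spike, not the talk's actual derivation:

```python
import numpy as np

def spike_encode(signal, theta=0.5, leak=0.95):
    """Fire a spike whenever the error between the input and an internal
    leaky reconstruction (the 'internal decoder') reaches the threshold."""
    recon, spikes, trace = 0.0, [], []
    for t, s in enumerate(signal):
        recon *= leak                    # reconstruction filter decays
        if s - recon >= theta:           # coding error hits the threshold
            spikes.append(t)
            recon += theta               # each spike adds one quantum
        trace.append(recon)
    return spikes, np.array(trace)
```

Because spikes are emitted only when the reconstruction error accumulates to the threshold, the spike rate tracks signal fidelity rather than signal amplitude, which is the energy/fidelity trade-off the abstract describes.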
<br />
'''October 26, 2016'''<br />
* Speaker: Eric Jonas<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Charles Frye<br />
* Status: confirmed<br />
* Title: Could a neuroscientist understand a microprocessor?<br />
* Abstract: There is a popular belief in neuroscience that we are primarily data limited, that producing large, multimodal, and complex datasets will, enabled by data analysis algorithms, lead to fundamental insights into the way the brain processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. Here we take a simulated classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the processor. This suggests that current computational approaches in neuroscience may fall short of producing meaningful models of the brain. We discuss several obvious shortcomings with this model, and ways that they might be addressed, both experimentally and computationally. <br />
* Bio: Eric Jonas is currently a postdoc in computer science at UC Berkeley working with Ben Recht on machine learning for scientific data acquisition. He earned his PhD in Computational Neuroscience, M. Eng in Electrical Engineering, BS in Electrical Engineering and Computer Science, and BS in Neurobiology, all from MIT. Prior to his return to academia, he was founder and CEO of Prior Knowledge, a predictive database company which was acquired in 2012 by Salesforce.com, where he was Chief Predictive Scientist until 2014. In 2015 he was named one of the top rising stars in bioengineering by the Defense Department’s Advanced Research Projects Agency (DARPA).<br />
<br />
'''Nov. 9, 2016'''<br />
* Speaker: Pulkit Agrawal<br />
* Time: 12:00<br />
* Affiliation: EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''Nov. 16, 2016'''<br />
* Speaker: Sebastian Musslick<br />
* Time: 12:00<br />
* Affiliation: Princeton Neuroscience Institute (Princeton University)<br />
* Host: Brian Cheung<br />
* Status: confirmed<br />
* Title: Parallel Processing Capability Versus Efficiency of Representation in Neural Network Architectures<br />
* Abstract: One of the most salient and well-recognized features of human goal-directed behavior is our limited ability to conduct multiple demanding tasks at once. Why is this? Some have suggested it reflects metabolic limitations, or structural ones. However, both explanations are unlikely. The brain routinely demonstrates the ability to carry out a multitude of processes in an enduring and parallel manner (walking, breathing, listening). Why, in contrast, is its capacity for allocating attention to control-demanding tasks - such a critical and powerful function - so limited? In the first part of my talk I will describe a computational framework that explains limitations of parallel processing in neural network architectures as the result of cross-talk between shared task representations. Using graph-theoretic analyses we show that the parallel processing (multitasking) capability of two-layer networks drops precipitously as a function of task pathway overlap, and scales highly sublinearly with network size. I will describe how this analysis can be applied to task representations encoded in neural networks or neuroimaging data, and show how it can be used to predict both concurrent and sequential multitasking performance in trained neural networks based on single task representations. Our results suggest that maximal parallel processing performance is achieved by segregating task pathways, by separating the representations on which they rely. However, there is a countervailing pressure for pathways to intersect: the re-use of representations to facilitate learning of new tasks. In the second part of my talk I will demonstrate a tradeoff between learning efficiency and parallel processing capability in neural networks. It can be shown that weight priors on learned task similarity improve learning speed and generalization but lead to strong constraints on parallel processing capability. These findings will be contrasted with an ongoing behavioral study assessing learning and multitasking performance of human subjects across tasks with varying degrees of feature overlap.<br />
<br />
'''Nov 30, 2016'''<br />
* Speaker: Marcus Rohrbach<br />
* Time: 12:00<br />
* Affiliation: EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''March 1st, 2017'''<br />
* Speaker: Sahar Akram<br />
* Time: 12:00<br />
* Affiliation: Starkey Hearing Research Center <br />
* Host: Shariq<br />
* Status: Confirmed<br />
* Title: Real-Time & Adaptive Auditory Neural Processing<br />
* Abstract: Decoding the dynamics of brain activity underlying conscious behavior is one of the key questions in systems neuroscience. Sensory neurons, such as those in the auditory system, can undergo rapid and task-dependent changes in their response characteristics during attentive behavior, thereby resulting in functional changes in the system over time. In order to quantify humans' conscious experience, neuroimaging techniques such as electroencephalography (EEG) and magnetoencephalography (MEG) are widely used to record the neural activity from the brain with millisecond temporal resolution. Therefore, a dynamic decoding framework on par with the sampling resolution of EEG/MEG is crucial in order to better understand the neural correlates underlying sophisticated cognitive functions such as attention. I will talk about two recent attempts at real-time decoding of brain neural activity during a competing auditory attention task, using Bayesian hierarchical modeling and adaptive signal processing.<br />
<br />
'''Mar 2, 2017'''<br />
* Speaker: Joszef Fiser<br />
* Time: 12:00<br />
* Affiliation:<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Mar 22, 2017'''<br />
* Speaker: Michael Frank<br />
* Time: 12:00<br />
* Affiliation: Magicore Systems<br />
* Host: Dylan<br />
* Status: Confirmed<br />
* Title: The Future of the Multi-core Platform Task-Superscalar Extensions to Von-Neumann Architecture and Optimization for Neural Networks<br />
* Abstract: Technology scaling had been carrying computer science through the second half of the 20th century until single-CPU performance started leveling off, after which multi- and many-core processors, including GPUs, emerged as the substrate for high-performance computing. Mobile market implementations followed this trend, and today you might be carrying a phone with more than 16 different processors. For power-efficiency reasons, many of the cores are specialized to perform limited functions (such as modem or connectivity control, graphics rendering, or future neural-network acceleration), with most mainstream phones containing four or more general-purpose processors. As Steve Jobs insightfully commented almost a decade ago, "The way the processor industry is going is to add more and more cores, but nobody knows how to program those things." Jobs was correct: programming these multiprocessor systems has become a challenge, and several programming models have been proposed in academia to address this issue. Power and thermals are also an ever-present thorn for mass-market applications. Through the years, CPUs based on the von Neumann architecture have fended off attacks from many directions; today complex superscalar implementations execute multiple instructions each clock cycle, in parallel and out of order, keeping up the illusion of sequential processing. Recent research demonstrates, though, that augmenting the von Neumann architecture with a few established concepts from data-flow and task-parallel programming will create both a credible and intuitive parallel architecture, enabling notable compute-efficiency improvement while retaining compatibility with the current mainstream. This talk will thus review the current state of the processor industry; after highlighting why we are running out of steam in ILP, I will outline the task-superscalar programming model as the "ring to rule them all" and provide insights as to how this architecture can take advantage of special HW acceleration for data-flow management and provide support for efficient neuromorphic computing.<br />
<br />
'''April 12, 2017'''<br />
* Speaker: Aapo Hyvarinen<br />
* Time: 12:00<br />
* Affiliation: Gatsby/UCL<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''May 24, 2017'''<br />
* Speaker: Pierre Sermanet<br />
* Time: 12:00<br />
* Affiliation: Google Brain<br />
* Host: Brian<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''May 30, 2017'''<br />
* Speaker: Heiko Schutt<br />
* Time: 12:00<br />
* Affiliation: Univ Tubingen<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''June 7, 2017'''<br />
* Speaker: Saurabh Gupta<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Spencer<br />
* Status: confirmed<br />
* Title: Cognitive Mapping and Planning for Visual Navigation<br />
* Abstract: We introduce a novel neural architecture for navigation in novel environments that learns a cognitive map from first person viewpoints and plans a sequence of actions towards goals in the environment. The Cognitive Mapper and Planner (CMP) is based on two key ideas: a) a unified joint architecture for mapping and planning, such that the mapping is driven by the needs of the planner, and b) a spatial memory with the ability to plan given an incomplete set of observations about the world. CMP constructs a top-down belief map of the world and applies a differentiable neural net planner to produce the next action at each time step. The accumulated belief of the world enables the agent to track visited regions of the environment. Our experiments demonstrate that CMP outperforms both reactive strategies and standard memory-based architectures and performs well even in novel environments. Furthermore, we show that CMP can also achieve semantically specified goals, such as “go to a chair”. This is joint work with James Davidson, Sergey Levine, Rahul Sukthankar and Jitendra Malik.<br />
<br />
'''June 14, 2017'''<br />
* Speaker: Madhow<br />
* Time: 12:00<br />
* Affiliation: UCSB<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''June 19, 2017'''<br />
* Speaker: Tali Tishby<br />
* Time: 12:00<br />
* Affiliation: Hebrew Univ.<br />
* Host: Bruno/Daniel Reichman<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''June 21, 2017'''<br />
* Speaker: Jasmine Collins<br />
* Time: 12:00<br />
* Affiliation: Google<br />
* Host: Brian<br />
* Status: confirmed<br />
* Title: Capacity and Trainability in Recurrent Neural Networks <br />
* Abstract: Two potential bottlenecks on the expressiveness of recurrent neural networks (RNNs) are their ability to store information about the task in their parameters, and to store information about the input history in their units. We show experimentally that all common RNN architectures achieve nearly the same per-task and per-unit capacity bounds with careful training, for a variety of tasks and stacking depths. They can store an amount of task information which is linear in the number of parameters, and is approximately 5 bits per parameter. They can additionally store approximately one real number from their input history per hidden unit. We further find that for several tasks it is the per-task parameter capacity bound that determines performance. These results suggest that many previous results comparing RNN architectures are driven primarily by differences in training effectiveness, rather than differences in capacity. Supporting this observation, we compare training difficulty for several architectures, and show that vanilla RNNs are far more difficult to train, yet have slightly higher capacity. Finally, we propose two novel RNN architectures, one of which is easier to train than the LSTM or GRU for deeply stacked architectures.<br />
<br />
=== 2015/16 academic year ===<br />
<br />
'''July 21, 2015'''<br />
* Speaker: Felix Effenberger<br />
* Affiliation: <br />
* Host: Chris H.<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''July 22, 2015'''<br />
* Speaker: Lav Varshney<br />
* Affiliation: Urbana-Champaign<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''July 23, 2015'''<br />
* Speaker: Xuemin Wei<br />
* Affiliation: Univ Penn<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''July 29, 2015'''<br />
* Speaker: Gonzalo Otazu<br />
* Affiliation: Cold Spring Harbor Laboratory, Long Island, NY<br />
* Host: Mike D<br />
* Status: Confirmed<br />
* Title: The Role of Cortical Feedback in Olfactory Processing<br />
* Abstract: The olfactory bulb receives rich glutamatergic projections from the piriform cortex. However, the dynamics and importance of these feedback signals remain unknown. In the first part of this talk, I will present data from multiphoton calcium imaging of cortical feedback in the olfactory bulb of awake mice. Responses of feedback boutons were sparse, odor specific, and often outlasted stimuli by several seconds. Odor presentation either enhanced or suppressed the activity of boutons. However, any given bouton responded with stereotypic polarity across multiple odors, preferring either enhancement or suppression. Inactivation of piriform cortex increased odor responsiveness and pairwise similarity of mitral cells but had little impact on tufted cells. We propose that cortical feedback differentially impacts these two output channels of the bulb by specifically decorrelating mitral cell responses to enable odor separation. In the second part of the talk I will introduce a computational model of odor identification in natural scenes that uses cortical feedback and how the model predictions match our experimental data.<br />
<br />
'''Aug 19, 2015'''<br />
* Speaker: Wujie Zhang<br />
* Affiliation: Columbia<br />
* Host: Bruno/Michael Yartsev<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept 2, 2015'''<br />
* Speaker: Jeremy Maitin-Shepard<br />
* Affiliation: Computer Science, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Combinatorial Energy Learning for Image Segmentation<br />
* Abstract: Recent advances in volume electron microscopy make it possible to image neuronal tissue volumes containing hundreds of thousands of neurons at sufficient resolution to discern even the finest neuronal processes. Accurate 3-D segmentation of these processes, densely packed in petavoxel-scale volumes, is the key bottleneck in reconstructing large-scale neural circuits.<br />
<br />
'''Sept 8, 2015'''<br />
* Speaker: Jennifer Hasler<br />
* Affiliation: Georgia Tech<br />
* Host: Bruno/Mika<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''October 29, 2015'''<br />
* Speaker: Garrett Kenyon<br />
* Affiliation: Los Alamos National Laboratory<br />
* Host: Dylan<br />
* Status: confirmed<br />
* Title: A Deconvolutional Competitive Algorithm (DCA)<br />
* Abstract: The Locally Competitive Algorithm (LCA) is a neurally-plausible sparse solver based on lateral inhibition between leaky integrator neurons. LCA accounts for many linear and nonlinear response properties of V1 simple cells, including end-stopping and contrast-invariant orientation tuning. Here, we describe a convolutional implementation of LCA in which a column of feature vectors is replicated with a stride that is much smaller than the diameter of the corresponding kernels, allowing the construction of dictionaries that are many times more overcomplete than without replication. Using a local Hebbian rule that minimizes sparse reconstruction error, we are able to learn representations from unlabeled imagery, including monocular and stereo video streams, that in some cases support near state-of-the-art performance on object detection, action classification and depth estimation tasks, with a simple linear classifier. We further describe a scalable approach to building a hierarchy of convolutional LCA layers, which we call a Deconvolutional Competitive Algorithm (DCA). All layers in a DCA are trained simultaneously and all layers contribute to a single image reconstruction, with each layer deconvolving its representation through all lower layers back to the image plane. We show that a 3-layer DCA trained on short video clips obtained from hand-held cameras exhibits a clear segregation of image content, with features in the top layer reconstructing large-scale structures while features in the middle and bottom layers reconstruct progressively finer details. Lastly, we describe PetaVision, an open source, cloud-friendly, high-performance neural simulation toolbox that was used to perform the numerical studies presented here.<br />
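For readers unfamiliar with LCA, a minimal (non-convolutional) sketch of its dynamics — leaky integrators driven by the stimulus projection and inhibited through dictionary overlaps — under assumed step sizes and thresholds (`lam`, `tau`, `n_steps` are illustrative, not from the talk):

```python
import numpy as np

def soft(u, lam):
    """Soft-threshold activation: neurons below |lam| stay silent."""
    return np.where(np.abs(u) > lam, u - lam * np.sign(u), 0.0)

def lca(x, Phi, lam=0.1, tau=10.0, n_steps=200):
    """Locally Competitive Algorithm: sparse-code x in dictionary Phi."""
    b = Phi.T @ x                            # feed-forward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral inhibition weights
    u = np.zeros(Phi.shape[1])               # membrane potentials
    for _ in range(n_steps):
        u += (b - u - G @ soft(u, lam)) / tau  # leaky-integrator dynamics
    return soft(u, lam)
```

The convolutional variant described in the talk replicates such a column of feature vectors across the image with a small stride; the lateral-inhibition structure is the same.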
<br />
'''Nov 18, 2015'''<br />
* Speaker: Hillel Adesnik<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Nov 17, 2015'''<br />
* Speaker: Manuel Lopez<br />
* Affiliation: <br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Dec 2, 2015'''<br />
* Speaker: Steven Brumby<br />
* Affiliation: [http://www.descarteslabs.com/ Descartes Labs]<br />
* Host: Dylan<br />
* Status: confirmed<br />
* Title: Seeing the Earth in the Cloud<br />
* Abstract: The proliferation of transistors has increased the performance of computing systems by over a factor of a million in the past 30 years, and is also dramatically increasing the amount of data in existence, driving improvements in sensor, communication and storage technology. Multi-decadal Earth and planetary remote sensing global datasets at the petabyte scale (8×10^15 bits) are now available in commercial clouds, and new satellite constellations are planning to generate petabytes of images per year, providing daily global coverage at a few meters per pixel. Cloud storage with adjacent high-bandwidth compute, combined with recent advances in neuroscience-inspired machine learning for computer vision, is enabling understanding of the world at a scale and at a level of granularity never before feasible. We report here on a computation processing over a petabyte of compressed raw data from 2.8 quadrillion pixels (2.8 petapixels) acquired by the US Landsat and MODIS programs over the past 40 years. Using commodity cloud computing resources, we convert the imagery to a calibrated, georeferenced, multiresolution tiled format suited for machine-learning analysis. We believe ours is the first application to process, in less than a day, on generally available resources, over a petabyte of scientific image data. We report on work using this reprocessed dataset for experiments demonstrating country-scale food production monitoring, an indicator for famine early warning. <br />
<br />
'''Dec 14, 2015'''<br />
* Speaker: Bill Softky <br />
* Affiliation:<br />
* Host: Bruno<br />
* Status: confirmed <br />
* Title: Screen addiction - informal Redwood group seminar<br />
<br />
'''Dec 16, 2015'''<br />
* Speaker: Mike Landy<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Feb 3, 2016'''<br />
* Speaker: Ping-Chen Huang<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Feb 17, 2016'''<br />
* Speaker: Andrew Saxe<br />
* Affiliation: Harvard<br />
* Host: Jesse<br />
* Status: confirmed<br />
* Title: Hallmarks of Deep Learning in the Brain<br />
<br />
'''Feb 24, 2016'''<br />
* Speaker: Miguel Perpinan<br />
* Affiliation: UC Merced<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
<br />
'''Mar 1, 2016'''<br />
* Speaker: Leon Gatys<br />
* Affiliation: Univ Tubingen<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Mar 7-9, 2016'''<br />
* NICE workshop<br />
<br />
'''Mar 9, 2016'''<br />
* Tatiana Engel - HWNI job talk at 12:00<br />
<br />
'''Mar 16, 2016'''<br />
* Talia Lerner - HWNI job talk at 12:00<br />
<br />
'''Mar 23, 2016'''<br />
* Speaker: Kwabena Boahen<br />
* Affiliation: Stanford<br />
* Host: Max Kanwal/Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''April 11, 2016'''<br />
* Speaker: Hao Su<br />
* Time: at 12:00<br />
* Affiliation: Geometric Computing Lab and Artificial Intelligence Lab, Stanford University<br />
* Host: Yubei<br />
* Status: confirmed<br />
* Title: [Tentative] Joint Analysis for 2D Images and 3D shapes<br />
* Abstract: Coming<br />
<br />
'''May 04, 2016'''<br />
* Speaker: Zhengya Zhang<br />
* Time: 12:00<br />
* Affiliation: Electrical Engineering and Computer Science, University of Michigan<br />
* Host: Dylan, Bruno<br />
* Status: Confirmed<br />
* Title: Sparse Coding ASIC Chips for Feature Extraction and Classification<br />
* Abstract: Hardware-based computer vision accelerators will be an essential part of future mobile and autonomous devices to meet the low power and real-time processing requirement. To realize a high energy efficiency and high throughput, the accelerator architecture can be massively parallelized and tailored to the underlying algorithms, which is an advantage over software-based solutions and general-purpose hardware. In this talk, I will present three application-specific integrated circuit (ASIC) chips that implement the sparse and independent local network (SAILnet) algorithm and the locally competitive algorithm (LCA) for feature extraction and classification. Two of the chips were designed using an array of leaky integrate-and-fire neurons. Sparse activations of the neurons make possible an efficient grid-ring architecture to deliver an image processing throughput of 1 G pixel/s using only 200 mW. The third chip was designed using a convolution approach. Sparsity is again an important factor that enabled the use of sparse convolvers to achieve an effective performance of 900 G operations/s using less than 150 mW.<br />
<br />
'''May 18, 2016'''<br />
* Speaker: Melanie Mitchell<br />
* Affiliation: Portland State University and Santa Fe Institute<br />
* Host: Dylan<br />
* Time: 12:00<br />
* Status: confirmed<br />
* Title: Using Analogy to Recognize Visual Situations<br />
* Abstract: Enabling computers to recognize abstract visual situations remains a hard open problem in artificial intelligence. No machine vision system comes close to matching human ability at identifying the contents of images or visual scenes, or at recognizing abstract similarity between different scenes, even though such abilities pervade human cognition. In this talk I will describe my research on getting computers to flexibly recognize visual situations by integrating low-level vision algorithms with an agent-based model of higher-level concepts and analogy-making.<br />
* Bio: Melanie Mitchell is Professor of Computer Science at Portland State University, and External Professor and Member of the Science Board at the Santa Fe Institute. She received a Ph.D. in Computer Science from the University of Michigan. Her dissertation, in collaboration with her advisor Douglas Hofstadter, was the development of Copycat, a computer program that makes analogies. She is the author or editor of five books and over 70 scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her most recent book, Complexity: A Guided Tour (Oxford, 2009), won the 2010 Phi Beta Kappa Science Book Award. It was also named by Amazon.com as one of the ten best science books of 2009, and was longlisted for the Royal Society's 2010 book prize. Melanie directs the Santa Fe Institute's Complexity Explorer project, which offers online courses and other educational resources related to the field of complex systems.<br />
<br />
'''June 8, 2016'''<br />
* Speaker: Kris Bouchard<br />
* Time: 12:00<br />
* Affiliation: LBNL<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: The union of intersections method<br />
* Abstract:<br />
<br />
'''June 15, 2016'''<br />
* Speaker: James Blackmon<br />
* Time: 12:00<br />
* Affiliation: San Francisco State University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
=== 2014/15 academic year ===<br />
<br />
'''2 July 2014'''<br />
* Speaker: Kelly Clancy<br />
* Affiliation: Feldman lab<br />
* Host: Guy<br />
* Status: confirmed<br />
* Title: Volitional control of neural assemblies in L2/3 of motor and somatosensory cortices<br />
* Abstract: I'll be talking about a joint effort between the Feldman, Carmena and Costa labs to study abstract task learning by small neuronal assemblies in intact networks. Brain-machine interfaces are a unique tool for studying learning, thanks to the direct mapping between neural activity and reward. We trained mice to operantly control an auditory cursor using spike-related calcium signals recorded with two-photon imaging in motor and somatosensory cortex, allowing us to assess the effects of learning with great spatial detail. Mice rapidly learned to modulate activity in layer 2/3 neurons, evident both across and within sessions. Interestingly, even neurons that exhibited very low or no spontaneous spiking--so-called 'silent' cells that are invisible to electrode-based techniques--could be behaviorally up-modulated for task performance. Learning was accompanied by modifications of firing correlations in spatially localized networks at fine scales.<br />
<br />
'''23 July 2014'''<br />
* Speaker: Gautam Agarwal<br />
* Affiliation: UC Berkeley/Champalimaud<br />
* Host: Friedrich Sommer<br />
* Status: confirmed<br />
* Title: Unsolved Mysteries of Hippocampal Dynamics<br />
* Abstract: Two radically different forms of electrical activity can be observed in the rat hippocampus: spikes and local field potentials (LFPs). Hippocampal pyramidal neurons are mostly silent, yet spike vigorously as the subject encounters particular locations in its environment. In contrast, LFPs appear to lack place-selectivity, persisting regardless of the rat's location. Recently, we found that in fact one can recover from LFPs the spatial information present in the underlying neuronal population, showing how these two signals are two sides of the same coin. Nonetheless, there are many aspects of the LFP that remain mysterious. I will review several observations and explanatory gaps which await further study. These include: the relationship of LFP patterns to anatomy; the elusive structure of gamma waves; complex forms of cross-frequency coupling; variations in LFP patterns seen when the rat explores its world more freely; reconciling the memory and navigation roles of the hippocampus.<br />
<br />
'''6 Aug 2014'''<br />
* Speaker: Georg Martius<br />
* Affiliation: Max Planck Institute, Leipzig<br />
* Host: Fritz Sommer<br />
* Status: confirmed<br />
* Title: Information driven self-organization of robotic behavior<br />
* Abstract: Autonomy is a puzzling phenomenon in nature and a major challenge in the world of artifacts. A key feature of autonomy in both natural and artificial systems is seen in the ability for independent exploration. In animals and humans, the ability to modify one's own pattern of activity is not only an indispensable trait for adaptation and survival in new situations; it also provides a learning system with novel information for improving its cognitive capabilities, and it is essential for development. Efficient exploration in high-dimensional spaces is a major challenge in building learning systems. We propose to implement exploration as a deterministic law derived from maximizing an information quantity. More specifically, we use the predictive information of the sensor process (of a robot) to obtain an update rule (exploration dynamics) for the controller parameters. To be adequate for robotics applications, the non-stationary nature of the underlying time series has to be taken into account, which we do by proposing the time-local predictive information (TiPI). Importantly, the exploration dynamics is derived analytically, and by this we link information theory and dynamical systems. Without a random component, the change in the parameters is deterministically given as a function of the states in a certain time window. For an embodied system this means in particular that constraints, responses and current knowledge of the dynamical interaction with the environment can directly be used to advance further exploration. Randomness is replaced with spontaneity, which we demonstrate restricts the search space automatically to the physically relevant dimensions. Its effectiveness will be presented with various experiments on high-dimensional robotic systems, and we argue that this is a promising way to avoid the curse of dimensionality. This talk describes joint work with Ralf Der and Nihat Ay.<br />
<br />
'''15 Aug 2014'''<br />
* Speaker: Juergen Schmidhuber<br />
* Affiliation: IDSIA, Switzerland<br />
* Host: James/Shariq<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''2 Sept 2014'''<br />
* Speaker: Oriol Vinyals <br />
* Affiliation: Google<br />
* Host: Guy<br />
* Status: confirmed<br />
* Title: Machine Translation with Long Short-Term Memory Models<br />
* Abstract: Large supervised deep neural networks have achieved good results in speech recognition and computer vision. Although very successful, deep neural networks can only be applied to problems whose inputs and outputs can be conveniently encoded with vectors of fixed dimensionality, and cannot easily be applied to problems whose inputs and outputs are sequences. In this work, we show how to use a large deep Long Short-Term Memory (LSTM) model to solve domain-agnostic supervised sequence-to-sequence problems with minimal manual engineering. Our model uses one LSTM to map the input sequence to a vector of fixed dimensionality and another LSTM to map the vector to the output sequence. We applied our model to a machine translation task and achieved encouraging results. On the WMT'14 translation task from English to French, a model combination of 6 large LSTMs achieves a BLEU score of 32.3 (where a larger score is better). For comparison, a strong standard statistical MT baseline achieves a BLEU score of 33.3. When we use our LSTM to rescore the n-best lists produced by the SMT baseline, we achieve a BLEU score of 36.3, which is a new state of the art. This is joint work with Ilya Sutskever and Quoc Le.<br />
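The encoder-decoder idea in the abstract above can be sketched in a few lines. This is not the authors' trained LSTM system; it is a toy illustration of the dataflow only, with plain tanh RNN cells standing in for LSTMs and random, untrained weights.<br />

```python
import numpy as np

# Toy sketch of the sequence-to-sequence idea: an encoder RNN compresses a
# variable-length input sequence into one fixed-size vector, and a decoder RNN
# unrolls that vector into an output sequence. All weights are random, so this
# shows the architecture's shape, not a working translator.

rng = np.random.default_rng(0)
H, V = 8, 5  # hidden size and toy vocabulary size (illustrative choices)

# Encoder/decoder parameters (hypothetical, untrained).
W_xh_enc, W_hh_enc = rng.normal(size=(H, V)), rng.normal(size=(H, H))
W_xh_dec, W_hh_dec = rng.normal(size=(H, V)), rng.normal(size=(H, H))
W_hy = rng.normal(size=(V, H))

def one_hot(i):
    v = np.zeros(V)
    v[i] = 1.0
    return v

def encode(tokens):
    """Map an input token sequence to a single fixed-dimensional vector."""
    h = np.zeros(H)
    for t in tokens:
        h = np.tanh(W_xh_enc @ one_hot(t) + W_hh_enc @ h)
    return h  # the fixed-size summary handed to the decoder

def decode(h, steps):
    """Unroll an output token sequence from the encoder's summary vector."""
    out, x = [], np.zeros(V)
    for _ in range(steps):
        h = np.tanh(W_xh_dec @ x + W_hh_dec @ h)
        y = int(np.argmax(W_hy @ h))  # greedy decoding, for illustration
        out.append(y)
        x = one_hot(y)
    return out

summary = encode([0, 3, 1, 4])  # input sequence of any length...
print(decode(summary, 3))       # ...becomes one vector, then an output sequence
```

The key point the abstract makes is that the interface between the two LSTMs is a single fixed-dimensional vector, which is what lets the approach handle sequences of differing lengths on both sides.<br />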
<br />
'''19 Sept 2014'''<br />
* Speaker: Gary Marcus<br />
* Affiliation: NYU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''24 Sept 2014'''<br />
* Speaker: Alyosha Efros<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''30 Sep 2014'''<br />
* Speaker: Alejandro Bujan<br />
* Affiliation:<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Propagation and variability of evoked responses: the role of correlated inputs and oscillations<br />
* Abstract: <br />
<br />
'''8 Oct 2014'''<br />
* Speaker: Siyu Zhang<br />
* Affiliation: UC Berkeley<br />
* Host: Karl<br />
* Status: confirmed<br />
* Title: Long-range and local circuits for top-down modulation of visual cortical processing<br />
* Abstract:<br />
<br />
'''15 Oct 2014'''<br />
* Speaker: Tamara Broderick<br />
* Affiliation: UC Berkeley<br />
* Host: Yvonne/James<br />
* Status: confirmed<br />
* Title: Feature allocations, probability functions, and paintboxes<br />
* Abstract: Clustering involves placing entities into mutually exclusive categories. We wish to relax the requirement of mutual exclusivity, allowing objects to belong simultaneously to multiple classes, a formulation that we refer to as "feature allocation." The first step is a theoretical one. In the case of clustering the class of probability distributions over exchangeable partitions of a dataset has been characterized (via exchangeable partition probability functions and the Kingman paintbox). These characterizations support an elegant nonparametric Bayesian framework for clustering in which the number of clusters is not assumed to be known a priori. We establish an analogous characterization for feature allocation; we define notions of "exchangeable feature probability functions" and "feature paintboxes" that lead to a Bayesian framework that does not require the number of features to be fixed a priori. The second step is a computational one. Rather than appealing to Markov chain Monte Carlo for Bayesian inference, we develop a method to transform Bayesian methods for feature allocation (and other latent structure problems) into optimization problems with objective functions analogous to K-means in the clustering setting. These yield approximations to Bayesian inference that are scalable to large inference problems.<br />
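The distinction drawn in the abstract above between a partition and a feature allocation can be made concrete with binary object-by-class matrices: a partition is simply a feature allocation whose rows each sum to one. A minimal sketch (the matrices and function names here are illustrative, not from the talk):<br />

```python
import numpy as np

# A clustering (partition) assigns each object to exactly one block; a feature
# allocation lets an object belong to several features, or to none.

def is_feature_allocation(Z):
    """Any binary object-by-class matrix is a valid feature allocation."""
    return bool(np.isin(Z, (0, 1)).all())

def is_partition(Z):
    """A partition additionally puts every object in exactly one block."""
    return is_feature_allocation(Z) and bool((Z.sum(axis=1) == 1).all())

clustering = np.array([[1, 0],   # object 0 -> cluster 0
                       [0, 1],   # object 1 -> cluster 1
                       [1, 0]])  # object 2 -> cluster 0
allocation = np.array([[1, 1],   # object 0 has both features
                       [0, 0],   # object 1 has none
                       [1, 0]])  # object 2 has feature 0 only

print(is_partition(clustering), is_partition(allocation))  # True False
```

This is why the characterization problem the talk addresses is strictly harder for feature allocations: dropping the rows-sum-to-one constraint enlarges the space of combinatorial structures that an exchangeable probability function must cover.<br />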
<br />
'''29 Oct 2014'''<br />
* Speaker: Ken Nakayama<br />
* Affiliation: Harvard<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Topics in higher level visuo-motor control<br />
* Abstract: TBA<br />
<br />
'''5 Nov 2014''' - **BVLC retreat**<br />
<br />
'''20 Nov 2014'''<br />
* Speaker: Haruo Hasoya<br />
* Affiliation: ATR Institute, Japan<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''9 Dec 2014'''<br />
* Speaker: Dirk DeRidder<br />
* Affiliation: Dunedin School of Medicine, University of Otago, New Zealand<br />
* Host: Bruno/Walter Freeman<br />
* Status: confirmed<br />
* Title: The Bayesian brain, phantom percepts and brain implants<br />
* Abstract: TBA<br />
<br />
'''January 14, 2015'''<br />
* Speaker: Kevin O'Regan<br />
* Affiliation: CNRS - Université Paris Descartes<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''January 21, 2015'''<br />
* Speaker: Adrienne Fairhall<br />
* Affiliation: University of Washington<br />
* Host: Mike Schachter<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''January 26, 2015'''<br />
* Speaker: Abraham Peled<br />
* Affiliation: Mental Health Center, 'Technion' Israel Institute of Technology<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Clinical Brain Profiling: A Neuro-Computational Psychiatry<br />
* Abstract: TBA<br />
<br />
'''January 28, 2015'''<br />
* Speaker: Rich Ivry<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Embodied Decision Making: System interactions in sensorimotor adaptation and reinforcement learning<br />
* Abstract:<br />
<br />
'''February 11, 2015'''<br />
* Speaker: Mark Lescroart<br />
* Affiliation: UC Berkeley<br />
* Host: Karl<br />
* Status: tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''February 25, 2015'''<br />
* Speaker: Steve Chase<br />
* Affiliation: CMU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Joint Redwood/CNEP seminar<br />
* Abstract:<br />
<br />
'''March 3, 2015'''<br />
* Speaker: Andreas Herz<br />
* Affiliation: Bernstein Center, Munich<br />
* Host: Bruno/Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''March 3, 2015 - 4:00'''<br />
* Speaker: James Cooke<br />
* Affiliation: Oxford<br />
* Host: Mike Deweese<br />
* Status: confirmed<br />
* Title: Neural Circuitry Underlying Contrast Gain Control in Primary Auditory Cortex<br />
* Abstract:<br />
<br />
'''March 4, 2015'''<br />
* Speaker: Bill Sprague<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: V1 disparity tuning and the statistics of disparity in natural viewing<br />
* Abstract:<br />
<br />
'''March 11, 2015'''<br />
* Speaker: Jozsef Fiser<br />
* Affiliation: Central European University<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''April 1, 2015'''<br />
* Speaker: Saeed Saremi<br />
* Affiliation: Salk Inst<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''April 15, 2015'''<br />
* Speaker: Zahra M. Aghajan<br />
* Affiliation: UCLA<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Hippocampal Activity in Real and Virtual Environments<br />
* Abstract:<br />
<br />
'''May 7, 2015'''<br />
* Speaker: Santani Teng<br />
* Affiliation: MIT<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''May 13, 2015'''<br />
* Speaker: Harri Valpola<br />
* Affiliation: ZenRobotics<br />
* Host: Brian<br />
* Status: Tentative<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''June 24, 2015'''<br />
* Speaker: Kendrick Kay<br />
* Affiliation: Department of Psychology, Washington University in St. Louis<br />
* Host: Karl<br />
* Status: Confirmed<br />
* Title: Using functional neuroimaging to reveal the computations performed by the human visual system<br />
* Abstract: Visual perception is the result of a complex set of computational transformations performed by neurons in the visual system. Functional magnetic resonance imaging (fMRI) is ideally suited for identifying these transformations, given its excellent spatial resolution and ability to monitor activity across the numerous areas of visual cortex. In this talk, I will review past research in which we used fMRI to develop increasingly accurate models of the stimulus transformations occurring in early and intermediate visual areas. I will then describe recent research in which we successfully extend this approach to high-level visual areas involved in perception of visual categories (e.g. faces) and demonstrate how top-down attention modulates bottom-up stimulus representations. Finally, I will discuss ongoing research targeting regions of ventral temporal cortex that are essential for skilled reading. Our model-based approach, combined with high-field laminar measurements, is expected to provide an integrated picture of how bottom-up stimulus transformations and top-down cognitive factors interact to support rapid and accurate word recognition. Development of quantitative models and associated experimental paradigms may help us understand and diagnose impairments in neural processing that underlie visual disorders such as dyslexia and prosopagnosia.<br />
<br />
=== 2013/14 academic year ===<br />
<br />
'''9 Oct 2013'''<br />
* Speaker: Ekaterina Brocke<br />
* Affiliation: KTH University, Stockholm, Sweden<br />
* Host: Tony<br />
* Status: confirmed<br />
* Title: Multiscale modeling in Neuroscience: first steps towards multiscale co-simulation tool development.<br />
* Abstract: Multiscale modeling and simulation attracts an increasing number of neuroscientists who study how different levels of organization (networks of neurons, cellular/subcellular levels) interact with each other across multiple scales of space and time to mediate different brain functions. Different scales are usually described by different physical and mathematical formalisms, thus making it non-trivial to perform the integration. In this talk, I will discuss key phenomena in neuroscience that can be addressed using subcellular/cellular models, and possible approaches to performing multiscale simulations, in particular a co-simulation method. I will also introduce several multiscale "toy" models of cellular/subcellular levels that were developed with the aim of understanding numerical and technical problems which might appear during the co-simulation. Finally, the first steps made towards multiscale co-simulation tool development will be presented during the talk.<br />
<br />
'''29 Oct 2013 - note: 4:00'''<br />
* Speaker: Mitya Chklovskii<br />
* Affiliation: HHMI/Janelia Farm<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''30 Oct 2013'''<br />
* Speaker: Ilya Nemenman<br />
* Affiliation: Emory University, Departments of Physics and Biology<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Large N in neural data -- expecting the unexpected.<br />
* Abstract: Recently it has become possible to directly measure simultaneous collective states of many biological components, such as neural activities, genetic sequences, or gene expression profiles. These data are revealing striking results, suggesting, for example, that biological systems are tuned to criticality, and that effective models of these systems based on only pairwise interactions among constitutive components provide surprisingly good fits to the data. We will explore a handful of simplified theoretical models, largely focusing on statistical mechanics of Ising spins, that suggest plausible explanations for these observations. Specifically, I will argue that, at least in certain contexts, these intriguing observations should be expected in multivariate interacting data in the thermodynamic limit of many interacting components.<br />
<br />
'''31 Oct 2013'''<br />
* Speaker: Oriol Vinyals<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Brian<br />
* Status: confirmed<br />
* Title: Beyond Deep Learning: Scalable Methods and Models for Learning<br />
* Abstract: In this talk I will briefly describe several techniques I explored in my thesis that improve how to efficiently model signal representations and learn useful information from them. The building block of my dissertation is based on machine learning approaches to classification, where a (typically non-linear) function is learned from labeled examples to map from signals to some useful information (e.g. an object class present in an image, or a word present in an acoustic signal). One of the motivating factors of my work has been advances in neural networks with deep architectures (which have led to the terminology "deep learning") and have shown state-of-the-art performance in acoustic modeling and object recognition -- the main focus of this thesis. In my work, I have contributed to both the learning (or training) of such architectures through faster and robust optimization techniques, and also to the simplification of the deep architecture model to an approach that is simple to optimize. Furthermore, I derived a theoretical bound showing a fundamental limitation of shallow architectures based on sparse coding (which can be seen as a one-hidden-layer neural network), thus justifying the need for deeper architectures, while also empirically verifying these architectural choices on speech recognition. Many of my contributions have been used in a wide variety of applications, products and datasets as a result of many collaborations within ICSI and Berkeley, but also at Microsoft Research and Google Research.<br />
<br />
'''6 Nov 2013'''<br />
* Speaker: Garrett T. Kenyon<br />
* Affiliation: Los Alamos National Laboratory, The New Mexico Consortium<br />
* Host: Dylan Paiton<br />
* Status: Confirmed<br />
* Title: Using Locally Competitive Algorithms to Model Top-Down and Lateral Interactions<br />
* Abstract: Cortical connections consist of feedforward, feedback and lateral pathways. Infragranular layers project down the cortical hierarchy to both supra- and infragranular layers at the previous processing level, while the neurons in supragranular layers are linked by extensive long-range lateral projections that cross multiple cortical columns. However, most functional models of visual cortex only account for feedforward connections. Additionally, most models of visual cortex fail to account both for the thalamic projections to non-striate areas and the reciprocal connections from extrastriate areas back to the thalamus. In this talk, I will describe how a modified Locally Competitive Algorithm (LCA; Rozell et al, Neural Comp, 2008) can be used as a unifying framework for exploring the role of top-down and lateral cortical pathways within the context of deep, sparse, generative models. I will also describe an open source software tool called PetaVision that can be used to implement and execute hierarchical LCA-based models on multi-core, multi-node computer platforms without requiring specific knowledge of parallel-programming constructs.<br />
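The LCA of Rozell et al. (Neural Comp, 2008) that the talk builds on can be sketched briefly: each unit's membrane potential is driven by its match to the input and inhibited by active, overlapping units, and a thresholded copy of the potential is the sparse code. The sketch below is feedforward-only; the top-down and lateral extensions that are the subject of the talk, and the PetaVision implementation, are not reproduced here, and all sizes and constants are illustrative.<br />

```python
import numpy as np

def soft_threshold(u, lam):
    """Thresholded activation: only strongly driven units become active."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(x, Phi, lam=0.1, dt=0.05, steps=300):
    """Sparse-code input x with dictionary Phi via LCA dynamics."""
    b = Phi.T @ x                            # feedforward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral inhibition (overlaps)
    u = np.zeros(Phi.shape[1])               # membrane potentials
    for _ in range(steps):
        a = soft_threshold(u, lam)
        u += dt * (b - u - G @ a)            # leaky integration + competition
    return soft_threshold(u, lam)

rng = np.random.default_rng(2)
Phi = rng.normal(size=(20, 40))
Phi /= np.linalg.norm(Phi, axis=0)           # unit-norm dictionary elements
x = 2.0 * Phi[:, 3] - 1.0 * Phi[:, 17]       # signal built from two elements
a = lca(x, Phi)
print(np.count_nonzero(np.abs(a) > 1e-3))    # only a handful of active units
```

The competition term `G @ a` is what the talk's framework generalizes: replacing or augmenting it models lateral and top-down cortical pathways within the same sparse generative setting.<br />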
<br />
'''14 Nov 2013 (note: Thursday), ***12:30pm*** '''<br />
* Speaker: Geoffrey J Goodhill<br />
* Affiliation: Queensland Brain Institute and School of Mathematics and Physics, The University of Queensland, Australia<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Computational principles of neural wiring development<br />
* Abstract: Brain function depends on precise patterns of neural wiring. An axon navigating to its target must make guidance decisions based on noisy information from molecular cues in its environment. I will describe a combination of experimental and computational work showing that (1) axons may act as ideal observers when sensing chemotactic gradients, (2) the complex influence of calcium and cAMP levels on guidance decisions can be predicted mathematically, (3) the morphology of growth cones at the axonal tip can be understood in terms of just a few eigenshapes, and remarkably these shapes oscillate in time with periods ranging from minutes to hours. Together this work may shed light on how neural wiring goes wrong in some developmental brain disorders, and how best to promote appropriate regrowth of axons after injury.<br />
<br />
'''4 Dec 2013'''<br />
* Speaker: Zhenwen Dai<br />
* Affiliation: FIAS, Goethe University Frankfurt, Germany.<br />
* Host: Georgios Exarchakis<br />
* Status: Confirmed<br />
* Title: What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach <br />
* Abstract: We study optimal image encoding based on a generative approach with non-linear feature combinations and explicit position encoding. By far most approaches to unsupervised learning of visual features, such as sparse coding or ICA, account for translations by representing the same features at different positions. Some earlier models used a separate encoding of features and their positions to facilitate invariant data encoding and recognition. All probabilistic generative models with explicit position encoding have so far assumed a linear superposition of components to encode image patches. Here, we for the first time apply a model with non-linear feature superposition and explicit position encoding for patches. By avoiding linear superpositions, the studied model represents a closer match to component occlusions which are ubiquitous in natural images. In order to account for occlusions, the non-linear model encodes patches qualitatively very different from linear models by using component representations separated into mask and feature parameters. We first investigated encodings learned by the model using artificial data with mutually occluding components. We find that the model extracts the components, and that it can correctly identify the occlusive components with the hidden variables of the model. On natural image patches, the model learns component masks and features for typical image components. By using reverse correlation, we estimate the receptive fields associated with the model’s hidden units. We find many Gabor-like or globular receptive fields as well as fields sensitive to more complex structures. Our results show that probabilistic models that capture occlusions and invariances can be trained efficiently on image patches, and that the resulting encoding represents an alternative model for the neural encoding of images in the primary visual cortex. <br />
<br />
'''11 Dec 2013'''<br />
* Speaker: Kai Siedenburg<br />
* Affiliation: UC Davis, Petr Janata's Lab.<br />
* Host: Jesse Engel<br />
* Status: Confirmed<br />
* Title: Characterizing Short-Term Memory for Musical Timbre<br />
* Abstract: Short-term memory is a cognitive faculty central to the apprehension of music and speech. Little is known, however, about memory for musical timbre despite its "sisterhood" with speech; after all, speech can be regarded as a sequencing of vocal timbre. Past research has isolated many characteristic effects of verbal memory. Are these also in play for non-vocal timbre sequences? We studied this question by considering short-term memory for serial order. Using timbres and dissimilarity data from McAdams et al. (Psych. Research, 1995), we employed a same/different discrimination paradigm. Experiment 1 (N = 30 MU + 30 nonMU) revealed effects of sequence length and timbral dissimilarity of items, as well as an interaction of musical training and pitch variability: in contrast to musicians, non-musicians' performance was impaired by simultaneous changes in pitch, compared to a constant-pitch baseline. Experiment 2 (N = 22) studied whether musicians' memory for timbre sequences was independent of pitch irrespective of the degree of complexity of pitch progressions. Comparing sequences with pitch changing within and across standard and comparison to a constant-pitch baseline, performance was now clearly impaired for the variable-pitch condition. Experiment 3 (N = 22) showed primacy and recency effects for musicians, and reproduced a positive effect of timbral heterogeneity of sequences. Our findings demonstrate the presence of hallmark effects of verbal memory such as similarity, word length, and primacy/recency for the domain of non-vocal timbre, and suggest that memory for speech and non-vocal timbre sequences might to a large extent share underlying mechanisms.<br />
<br />
'''12 Dec 2013'''<br />
* Speaker: Matthias Bethge<br />
* Affiliation: University of Tubingen<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''22 Jan 2014'''<br />
* Speaker: Thomas Martinetz<br />
* Affiliation: Univ Luebeck<br />
* Host: Bruno/Fritz<br />
* Status: confirmed<br />
* Title: Orthogonal Sparse Coding and Sensing<br />
* Abstract: Sparse coding has been a very successful concept, since many natural signals have the property of being sparse in some dictionary (basis). Some natural signals are even sparse in an orthogonal basis, most prominently natural images, which are sparse in a corresponding wavelet transform. An encoding in an orthogonal basis has a number of advantages, e.g., finding the optimal coding coefficients is simply a projection instead of being NP-hard. Given some data, we want to find the orthogonal basis which provides the sparsest code. This problem can be seen as a generalization of Principal Component Analysis. We present an algorithm, Orthogonal Sparse Coding (OSC), which is able to find this basis very robustly. On natural images, it compresses on the level of JPEG, but it can adapt to arbitrary and special data sets and achieve significant improvements. With the property of being sparse in some orthogonal basis, we show how signals can be sensed very efficiently in a hierarchical manner with at most k log D sensing actions. This hierarchical sensing might relate to the way we sense the world, with interesting applications in active vision.<br />
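The claim above that coding in an orthogonal basis is "simply a projection" can be checked directly: for an orthonormal basis B, the optimal k-sparse code of x keeps the k largest-magnitude entries of B.T @ x. The sketch below shows only this easy sub-step; the OSC algorithm itself (learning B from data) is not reproduced here, and the example basis is random, not learned.<br />

```python
import numpy as np

def best_k_sparse(x, B, k):
    """Optimal k-term approximation of x in the orthonormal basis B (columns)."""
    c = B.T @ x                         # projection yields all coefficients
    keep = np.argsort(np.abs(c))[-k:]   # keep the k largest in magnitude
    c_sparse = np.zeros_like(c)
    c_sparse[keep] = c[keep]
    return c_sparse

rng = np.random.default_rng(1)
B, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # random orthonormal basis
x = 3.0 * B[:, 0] + 1.5 * B[:, 5]               # exactly 2-sparse signal in B
c = best_k_sparse(x, B, k=2)
print(np.allclose(B @ c, x))  # prints True: the 2-term code reconstructs x
```

With a non-orthogonal (overcomplete) dictionary, by contrast, the same k-term problem is NP-hard in general, which is the advantage the abstract highlights.<br />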
<br />
'''29 Jan 2014'''<br />
* Speaker: David Klein<br />
* Affiliation: Audience<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''5 Feb 2014''' (leave open for Barth/Martinetz seminar)<br />
<br />
'''12 Feb 2014'''<br />
* Speaker: Ilya Sutskever <br />
* Affiliation: Google<br />
* Host: Zayd<br />
* Status: confirmed<br />
* Title: Continuous vector representations for machine translation<br />
* Abstract: Dictionaries and phrase tables are the basis of modern statistical machine translation systems. I will present a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures using large monolingual data, and by mapping between the languages using a small bilingual dataset. It uses distributed representations of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs. Joint work with Tomas Mikolov and Quoc Le.<br />
<br />
'''25 Feb 2014'''<br />
* Speaker: Alexander Terekhov <br />
* Affiliation: CNRS - Université Paris Descartes<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Constructing space: how a naive agent can learn spatial relationships by observing sensorimotor contingencies<br />
* Abstract:<br />
<br />
'''12 March 2014'''<br />
* Speaker: Carlos Portera-Cailliau<br />
* Affiliation: UCLA<br />
* Host: Mike<br />
* Status: confirmed<br />
* Title: Circuit defects in the neocortex of Fmr1 knockout mice<br />
* Abstract: TBA<br />
<br />
'''19 March 2014'''<br />
* Speaker: Dean Buonomano<br />
* Affiliation: UCLA<br />
* Host: Mike<br />
* Status: confirmed<br />
* Title: State-dependent Networks: Timing and Computations Based on Neural Dynamics and Short-term Plasticity<br />
* Abstract: The brain’s ability to seamlessly assimilate and process spatial and temporal information is critical to most behaviors, from understanding speech to playing the piano. Indeed, because the brain evolved to navigate a dynamic world, timing and temporal processing represent a fundamental computation. We have proposed that timing and the processing of temporal information emerges from the interaction between incoming stimuli and the internal state of neural networks. The internal state, is defined not only by ongoing activity (the active state) but by time-varying synaptic properties, such as short-term synaptic plasticity (the hidden state). One prediction of this hypothesis is that timing is a general property of cortical circuits. We provide evidence in this direction by demonstrating that in vitro cortical networks can “learn” simple temporal patterns. Finally, previous theoretical studies have suggested that recurrent networks capable of self-perpetuating activity hold significant computational potential. However, harnessing the computational potential of these networks has been hampered by the fact that such networks are chaotic. We show that it is possible to “tame” chaos through recurrent plasticity, and create a novel and powerful general framework for how cortical circuits compute.<br />
<br />
'''26 March 2014'''<br />
* Speaker: Robert G. Smith<br />
* Affiliation: University of Pennsylvania<br />
* Host: Mike S<br />
* Status: confirmed<br />
* Title: Role of Dendritic Computation in the Direction-Selective Circuit of Retina<br />
* Abstract: The retina utilizes a variety of signal processing mechanisms to compute direction from image motion. The computation is accomplished by a circuit that includes starburst amacrine cells (SBACs), which are GABAergic neurons presynaptic to direction-selective ganglion cells (DSGCs). SBACs are symmetric neurons with several branched dendrites radiating out from the soma. When a stimulus moving back and forth along a SBAC dendrite sequentially activates synaptic inputs, larger post-synaptic potentials (PSPs) are produced in the dendritic tips when the stimulus moves outwards from the soma. The directional difference in EPSP amplitude is further amplified near the dendritic tips by voltage-gated channels to produce directional release of GABA. Reciprocal inhibition between adjacent SBACs may also amplify directional release. Directional signals in the independent SBAC branches are preserved because each dendrite makes selective contacts only with DSGCs of the appropriate preferred-direction. Directional signals are further enhanced within the dendritic arbor of the DSGC, which essentially comprises an array of distinct dendritic compartments. Each of these dendritic compartments locally sum excitatory and inhibitory inputs, amplifies them with voltage-gated channels, and generates spikes that propagate to the axon via the soma. Overall, the computation of direction in the retina is performed by several local dendritic mechanisms both presynaptic and postsynaptic, with the result that directional responses are robust over a broad range of stimuli.<br />
<br />
'''16 April 2014'''<br />
* Speaker: David Pfau<br />
* Affiliation: Columbia<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''22 April 2014 *Tuesday*'''<br />
* Speaker: Jochen Braun<br />
* Affiliation: Otto-von-Guericke University, Magdeburg<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Dynamics of visual perception and collective neural activity<br />
* Abstract:<br />
<br />
'''29 April 2014'''<br />
* Speaker: Giuseppe Vitiello<br />
* Affiliation: University of Salerno<br />
* Host: Fritz/Walter Freeman<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''30 April 2014'''<br />
* Speaker: Masataka Watanabe<br />
* Affiliation: University of Tokyo / Max Planck Institute for Biological Cybernetics<br />
* Host: Gautam Agarwal<br />
* Status: confirmed<br />
* Title: Turing Test for Machine Consciousness and the Chaotic Spatiotemporal Fluctuation Hypothesis<br />
* Abstract: I propose an experimental method to test various hypotheses on consciousness. Inspired by Sperry's observation that split-brain patients possess two independent streams of consciousness, the idea is to implement candidate neural mechanisms of visual consciousness onto an artificial cortical hemisphere and test whether subjective experience is evoked in the device's visual hemifield. In contrast to modern neurosynthetic devices, I show that mimicking interhemispheric connectivity assures that authentic and fine-grained subjective experience arises only when a stream of consciousness is generated within the device. It is valid under a widely believed assumption regarding interhemispheric connectivity and neuronal stimulus-invariance. (I will briefly explain my own evidence of human V1 not responding to changes in the contents of visual awareness [1])<br />
<br />
If consciousness is actually generated within the device, we should be able to construct a case where two objects presented in the device's visual field are distinguishable by visual experience but not by what is communicated through the brain-machine interface. As strange as it may sound, and clearly violating the laws of physics, this is likely to be happening in the intact brain, where unified subjective bilateral vision and its verbal report occur without the total interhemispheric exchange of conscious visual information.<br />
<br />
Together, I present a hypothesis on the neural mechanism of consciousness, "The Chaotic Spatiotemporal Fluctuation Hypothesis", that passes the proposed test for visual qualia and also explains how physics as we know it today is violated. Here, neural activity is divided into two components, the time-averaged activity and the residual temporally fluctuating activity, where the former serves as the content of consciousness (neuronal population vector) and the latter as consciousness itself. The content is "read" into consciousness in the sense that every local perturbation caused by change in the neuronal population vector creates a spatiotemporal wave in the fluctuation component that travels throughout the system. Deterministic chaos assures that every local difference makes a difference to the whole of the dynamics, as in the butterfly effect, serving as a foundation for the holistic nature of consciousness. I will present data from simultaneous electrophysiology-fMRI recordings and human fMRI [2] that support the existence of such large-scale causal fluctuation.<br />
<br />
Here, the chaotic fluctuation cannot be decoded to trace back the original perturbation in the neuronal population vector, because the initial states of all neurons would be required with infinite precision to do so. Hence what is transmitted between the two hemispheres is not "information" in the normal sense. This illustrates how physics is violated under the metaphysical assumption that "chaotic spatiotemporal fluctuation is consciousness," where the unification of bilateral vision and the solving of visual tasks (e.g. perfect symmetry detection) are achieved without exchanging the otherwise required Shannon information between the two hemispheres.<br />
<br />
Finally, minimal and realistic versions of the proposed test for visual qualia can be conducted on laboratory animals to validate the hypothesis. These versions deal with two biological hemispheres, which we already know contain consciousness. We dissect the interhemispheric connectivity and replace it with an artificial one that is capable of filtering out the neural fluctuation component. A limited interhemispheric connectivity may be sufficient, which would drastically reduce the technological challenge. If the subject is capable of performing a bilateral stimulus-matching task with the full artificial interhemispheric connectivity, but not when the fluctuation component is filtered out, this can be considered strong supporting evidence for the hypothesis.<br />
<br />
1. Watanabe, M., Cheng, K., Ueno, K., Asamizuya, T., Tanaka, K., Logothetis, N., Attention but not awareness modulates the BOLD signal in the human V1 during binocular suppression. Science, 2011. 334(6057): p. 829-31.<br />
<br />
2. Watanabe, M., Bartels, A., Macke, J., Logothetis, N., Temporal jitter of the BOLD signal reveals a reliable initial dip and improved spatial resolution. Curr Biol, 2013. 23(21): p. 2146-50.<br />
<br />
'''11 June 2014'''<br />
* Speaker: Stuart Hameroff<br />
* Affiliation: University of Arizona, Tucson<br />
* Host: Gautam<br />
* Status: confirmed<br />
* Title: ‘Tuning the brain’ – Treating mental states through microtubule vibrations <br />
* Abstract: Do mental states derive entirely from brain neuronal membrane activities? Neuronal interiors are organized by microtubules (‘MTs’), protein polymers proposed to encode memory, process information and support consciousness. Using nanotechnology, Bandyopadhyay’s group at the National Institute for Materials Science in Japan has shown coherent vibrations (megahertz to 10 kilohertz) from microtubule bundles inside active neurons, vibrations (electric field potentials ~40 to 50 mV) able to influence membrane potentials. This suggests that EEG rhythms are ‘beat’ frequencies of megahertz vibrations in microtubules inside neurons (Hameroff and Penrose, 2014), and that consciousness and cognition involve vibrational patterns resonating across scales in the brain, more like music than computation. MT megahertz vibrations may be a useful therapeutic target for ‘tuning’ mood and mental states. Among noninvasive transcranial brain stimulation techniques (TMS, tDCS), transcranial ultrasound (TUS) delivers megahertz mechanical vibrations. Applied at the scalp, low-intensity, sub-thermal ultrasound safely reaches the brain. In human studies, brief (15 to 30 seconds) TUS at 0.5, 2 and 8 megahertz to frontal-temporal cortex results in 40 minutes or longer of reported mood improvement, and focused TUS enhances sensory discrimination (Legon et al, 2014). In vitro, ultrasound promotes neurite outgrowth in embryonic neurons (Raman), and stabilizes microtubules against disassembly (Gupta). (In Alzheimer’s disease, MTs disassemble and release tau.) These findings suggest ‘tuning the brain’ with TUS should be a safe, effective and inexpensive treatment for Alzheimer’s, traumatic brain injury, depression, anxiety, PTSD and other disorders. <br />
<br />
References: Hameroff S, Penrose R (2014) Phys Life Rev http://www.sciencedirect.com/science/article/pii/S1571064513001188; Sahu et al (2013) Biosens Bioelectron 47:141–8; Sahu et al (2013) Appl Phys Lett 102:123701; Legon et al (2014) Nature Neuroscience 17: 322–329<br />
<br />
'''25 June 2014'''<br />
* Speaker: Peter Loxley<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: The two-dimensional Gabor function adapted to natural image statistics: An analytical model of simple-cell responses in the early visual system<br />
* Abstract: TBA<br />
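As context for the title (the abstract is TBA), the standard two-dimensional Gabor function, a Gaussian envelope multiplying a sinusoidal carrier, can be sketched as follows. All parameter values and names here are illustrative, not taken from the talk.

```python
import math

def gabor_2d(x, y, sigma_x=2.0, sigma_y=1.0, freq=0.25, theta=0.0, phase=0.0):
    """Standard 2D Gabor: an oriented Gaussian envelope times a sinusoidal carrier."""
    # Rotate coordinates into the filter's preferred orientation.
    xr = x * math.cos(theta) + y * math.sin(theta)
    yr = -x * math.sin(theta) + y * math.cos(theta)
    envelope = math.exp(-(xr ** 2 / (2 * sigma_x ** 2) + yr ** 2 / (2 * sigma_y ** 2)))
    carrier = math.cos(2 * math.pi * freq * xr + phase)
    return envelope * carrier

# An even (cosine-phase) Gabor peaks at the origin.
print(round(gabor_2d(0.0, 0.0), 6))  # -> 1.0
```

The envelope localizes the filter in space while the carrier selects a spatial frequency and orientation, which is why Gabor functions are the standard analytical model of simple-cell receptive fields.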
<br />
=== 2012/13 academic year ===<br />
<br />
'''26 Sept 2012''' <br />
* Speaker: Jason Yeatman<br />
* Affiliation: Department of Psychology, Stanford University<br />
* Host: Bruno/Susana Chung<br />
* Status: confirmed<br />
* Title: The Development of White Matter and Reading Skills<br />
* Abstract: The development of cerebral white matter involves both myelination and pruning of axons, and the balance between these two processes may differ between individuals. Cross-sectional measures of white matter development mask the interplay between these active developmental processes and their connection to cognitive development. We followed a cohort of 39 children longitudinally for three years, and measured white matter development and reading development using diffusion tensor imaging and behavioral tests. In the left arcuate and inferior longitudinal fasciculus, children with above-average reading skills initially had low fractional anisotropy (FA) with a steady increase over the 3-year period, while children with below-average reading skills had higher initial FA that declined over time. We describe a dual-process model of white matter development that balances biological processes that have opposing effects on FA, such as axonal myelination and pruning, to explain the pattern of results.<br />
<br />
'''8 Oct 2012''' <br />
* Speaker: Sophie Deneve<br />
* Affiliation: Laboratoire de Neurosciences cognitives, ENS-INSERM<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Balanced spiking networks can implement dynamical systems with predictive coding<br />
* Abstract: Neural networks can integrate sensory information and generate continuously varying outputs, even though individual neurons communicate only with spikes---all-or-none events. Here we show how this can be done efficiently if spikes communicate "prediction errors" between neurons. We focus on the implementation of linear dynamical systems and derive a spiking network model from a single optimization principle. Our model naturally accounts for two puzzling aspects of cortex. First, it provides a rationale for the tight balance and correlations between excitation and inhibition. Second, it predicts asynchronous and irregular firing as a consequence of predictive population coding, even in the limit of vanishing noise. We show that our spiking networks have error-correcting properties that make them far more accurate and robust than comparable rate models. Our approach suggests spike times do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly under-estimated.<br />
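The core idea, spikes fired only when they reduce a prediction error, can be illustrated with a toy simulation. This is a minimal sketch assuming a one-dimensional signal and a greedy firing rule; the parameters, simplifications, and function names are mine, not the paper's.

```python
import math
import random

def run_predictive_coding_net(steps=2000, dt=0.001, lam=10.0, n=20, w=0.1, seed=0):
    """Toy network: the readout is a leaky sum of weighted spikes, and a
    neuron fires only if its spike brings the readout closer to the target."""
    random.seed(seed)
    weights = [w if i % 2 == 0 else -w for i in range(n)]  # half excitatory, half inhibitory
    x = 0.0      # target: a leaky integrator driven by a command signal
    xhat = 0.0   # network estimate decoded from spikes
    total_err = 0.0
    for t in range(steps):
        c = math.sin(2 * math.pi * t * dt)
        x += dt * (-lam * x + 5.0 * c)   # target dynamics
        xhat += dt * (-lam * xhat)       # estimate decays between spikes
        order = list(range(n))
        random.shuffle(order)            # break ties between equivalent neurons
        for i in order:
            # Spike = prediction error: fire only if it reduces |x - xhat|.
            if abs(x - (xhat + weights[i])) < abs(x - xhat):
                xhat += weights[i]
                break
        total_err += abs(x - xhat)
    return total_err / steps             # mean tracking error

print(run_predictive_coding_net() < 0.1)  # the estimate tracks the target closely
```

The greedy rule keeps the error bounded by roughly half a spike's readout weight, which is a crude stand-in for the tight excitation-inhibition balance the abstract describes.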
<br />
<br />
'''19 Oct 2012'''<br />
* Speaker: Gert Van Dijck<br />
* Affiliation: Cambridge<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: A solution to identifying neurones using extracellular activity in awake animals: a probabilistic machine-learning approach<br />
* Abstract: Electrophysiological studies over the last fifty years have been hampered by the difficulty of reliably assigning signals to identified cortical neurones. Previous studies have employed a variety of measures based on spike timing or waveform characteristics to tentatively classify neurone types (Vos et al., Eur. J. Neurosci., 1999; Prsa et al., J. Neurosci., 2009), in some cases supported by juxtacellular labelling (Simpson et al., Prog. Brain Res., 2005; Holtzman et al., J. Physiol., 2006; Barmack and Yakhnitsa, J. Neurosci., 2008; Ruigrok et al., J. Neurosci., 2011), or by intracellular staining and/or assessment of membrane properties (Chadderton et al., Nature, 2004; Jorntell and Ekerot, J. Neurosci., 2006; Rancz et al., Nature, 2007). Anaesthetised animals have been widely used because they provide a ground truth through neuronal labelling, which is much harder to achieve in awake animals, where spike-derived measures tend to be relied upon (Lansink et al., Eur. J. Neurosci., 2010). Whilst spike shapes carry potentially useful information for classifying neuronal classes, they vary with electrode type and the geometric relationship between the electrode and the spike-generation zone (Van Dijck et al., Int. J. Neural Syst., 2012). Moreover, spike-shape measurement is achieved with a variety of techniques, making it difficult to compare and standardise between laboratories. In this study we build probabilistic models on statistics derived from the spike trains of spontaneously active neurones in the cerebellum and the ventral midbrain. The mean spike frequency in combination with the log-interval entropy (Bhumbra and Dyball, J. Physiol.-London, 2004) of the inter-spike-interval distribution yields the highest prediction accuracy. The cerebellum model consists of two sub-models: a molecular layer - Purkinje layer model and a granular layer - Purkinje layer model. The first identifies molecular layer interneurones and Purkinje cells with high accuracy (92.7%), while the latter identifies Golgi cells, granule cells, mossy fibres and Purkinje cells with high accuracy (99.2%). Furthermore, the model trained on anaesthetised rat and decerebrate cat data has broad applicability to other species and behavioural states: anaesthetised mice (80%), awake rabbits (94.2%) and awake rhesus monkeys (89-90%). Recently, optogenetics has made it possible to obtain a ground truth about cell classes. Using optogenetically identified GABAergic and dopaminergic cells, we build similar statistical models to identify these neurone types in the ventral midbrain. This illustrates that our approach will be of general use to a broad variety of laboratories.<br />
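The feature pair named in the abstract, mean spike frequency plus log-interval entropy, lends itself to a simple probabilistic classifier. Below is a minimal sketch on synthetic spike trains using a Gaussian naive-Bayes model; the two cell classes, their rates, and all helper names are illustrative assumptions, not the study's data or code.

```python
import math
import random

def log_interval_entropy(isis, bins=10):
    """Entropy (bits) of the histogram of log inter-spike intervals."""
    logs = [math.log(i) for i in isis]
    lo, hi = min(logs), max(logs) + 1e-9
    counts = [0] * bins
    for v in logs:
        counts[min(int((v - lo) / (hi - lo) * bins), bins - 1)] += 1
    return -sum(c / len(logs) * math.log2(c / len(logs)) for c in counts if c)

def features(isis):
    # (mean firing rate, log-interval entropy)
    return (1.0 / (sum(isis) / len(isis)), log_interval_entropy(isis))

def fit_gaussians(samples):
    """Per-feature mean and standard deviation for one cell class."""
    stats = []
    for f in range(2):
        vals = [s[f] for s in samples]
        mu = sum(vals) / len(vals)
        sd = math.sqrt(sum((v - mu) ** 2 for v in vals) / len(vals)) + 1e-9
        stats.append((mu, sd))
    return stats

def loglik(feat, stats):
    return sum(-0.5 * ((x - mu) / sd) ** 2 - math.log(sd)
               for x, (mu, sd) in zip(feat, stats))

random.seed(1)
# Synthetic ground truth: a fast regular class vs a slow irregular class.
regular = [[random.gammavariate(20, 0.001) for _ in range(500)] for _ in range(40)]
irregular = [[random.expovariate(5.0) for _ in range(500)] for _ in range(40)]
model = {"regular": fit_gaussians([features(t) for t in regular[:20]]),
         "irregular": fit_gaussians([features(t) for t in irregular[:20]])}

def classify(isis):
    f = features(isis)
    return max(model, key=lambda c: loglik(f, model[c]))

acc = (sum(classify(t) == "regular" for t in regular[20:]) +
       sum(classify(t) == "irregular" for t in irregular[20:])) / 40
print(acc)  # held-out accuracy on the synthetic classes
```

With well-separated synthetic classes the held-out accuracy is essentially perfect; the point is only to show how two spike-train statistics feed a class-conditional probabilistic model.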
<br />
'''Tuesday, 23 Oct 2012''' <br />
* Speaker: Jaimie Sleigh<br />
* Affiliation: University of Auckland<br />
* Host: Fritz/Andrew Szeri<br />
* Status: confirmed<br />
* Title: Is General Anesthesia a failure of cortical information integration?<br />
* Abstract: General anesthesia and natural sleep share some commonalities and some differences. Quite a lot is known about the chemical and neuronal effects of general anesthetic drugs. There are two main groups of anesthetic drugs, which can be distinguished by their effects on the EEG. The most commonly used drugs exert a strong GABAergic action, whereas a second group is characterized by minimal GABAergic effects but significant NMDA blockade. It is less clear which of these various effects cause the patient to fail to wake up when the surgeon cuts them, and how. I will present some results from experimental brain-slice work, and from theoretical mean-field modelling of anesthesia and sleep, that support the idea that the final common mechanism of both types of anaesthesia is the fragmentation of long-distance information flow in the cortex.<br />
<br />
'''31 Oct 2012''' (Halloween)<br />
* Speaker: Jonathan Landy<br />
* Affiliation: UCSB<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Mean-field replica theory: review of basics and a new approach<br />
* Abstract: Replica theory provides a general method for evaluating the mode of a distribution, and has varied applications to problems in statistical mechanics, signal processing, etc. Evaluation of the formal expressions arising in replica theory represents a formidable technical challenge, but one that physicists have apparently intuited correct methods for handling. In this talk, I will first provide a review of the historical development of replica theory, covering: 1) motivation, 2) the intuited "Parisi ansatz" solution, 3) continued controversies, and 4) a survey of applications (including to neural networks). Following this, I will discuss an exploratory effort of mine, aimed at developing an ansatz-free solution method. As an example, I will work out the phase diagram for a simple spin-glass model. This talk is intended primarily as a tutorial.<br />
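The replica trick at the center of this program can be stated compactly. A standard textbook form (generic, not specific to this talk) for the quenched average of the free energy is:

```latex
% Quenched average of \log Z via the replica trick: compute the moments
% \overline{Z^n} for integer n, then continue analytically to n -> 0.
\overline{\log Z} \;=\; \lim_{n \to 0} \frac{\overline{Z^n} - 1}{n}
```

For integer n, the average of Z^n is the partition function of n coupled copies ("replicas") of the system, which is often tractable; the controversial step, and the subject of the Parisi ansatz, is the analytic continuation to n → 0.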
<br />
'''7 Nov 2012''' <br />
* Speaker: Tom Griffiths<br />
* Affiliation: UC Berkeley<br />
* Host:Daniel Little<br />
* Status: Confirmed<br />
* Title: Identifying human inductive biases<br />
* Abstract: People are remarkably good at acquiring complex knowledge from limited data, as is required in learning causal relationships, categories, or aspects of language. Successfully solving inductive problems of this kind requires having good "inductive biases" - constraints that guide inductive inference. Viewed abstractly, understanding human learning requires identifying these inductive biases and exploring their origins. I will argue that probabilistic models of cognition provide a framework that can facilitate this project, giving a transparent characterization of the inductive biases of ideal learners. I will outline how probabilistic models are traditionally used to solve this problem, and then present a new approach that uses Markov chain Monte Carlo algorithms as the basis for an experimental method that magnifies the effects of inductive biases.<br />
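The experimental logic, using a human choice as the acceptance step of an MCMC sampler so that the chain's stationary distribution reveals the prior, can be sketched in simulation. This is a minimal sketch with a simulated participant and a hypothetical Gaussian "inductive bias"; all names and parameters are illustrative, not from the talk.

```python
import math
import random

def simulated_choice(x_new, x_old, prior):
    """Stand-in for a human judgment: choose between two hypotheses with
    probability proportional to their subjective probability (Barker rule)."""
    p_new, p_old = prior(x_new), prior(x_old)
    return random.random() < p_new / (p_new + p_old)

def mcmc_with_people(prior, steps=20000, step_sd=1.0, seed=0):
    """If responses follow the Barker rule, the chain's stationary
    distribution is the subjective prior, i.e. the inductive bias."""
    random.seed(seed)
    x, samples = 0.0, []
    for _ in range(steps):
        proposal = x + random.gauss(0, step_sd)
        if simulated_choice(proposal, x, prior):
            x = proposal
        samples.append(x)
    return samples

# A hypothetical Gaussian inductive bias centered at 2.0.
prior = lambda x: math.exp(-0.5 * (x - 2.0) ** 2)
s = mcmc_with_people(prior)[5000:]  # discard burn-in
print(round(sum(s) / len(s), 2))    # chain mean sits near the bias center
```

The magnification effect described in the abstract comes from iterating: the participant's biases shape every acceptance decision, so the samples concentrate on the prior rather than on any single stimulus.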
<br />
'''19 Nov 2012''' (Monday) (Thanksgiving week)<br />
* Speaker: Bin Yu<br />
* Affiliation: Dept. of Statistics and EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Representation of Natural Images in V4<br />
* Abstract: The functional organization of area V4 in the mammalian ventral visual pathway is far from being well understood. V4 is believed to play an important role in the recognition of shapes and objects and in visual attention, but the complexity of this cortical area makes it hard to analyze. In particular, no current model of V4 has shown good predictions for neuronal responses to natural images and there is no consensus on the primary role of V4.<br />
In this talk, we present analysis of electrophysiological data on the response of V4 neurons to natural images. We propose a new computational model that achieves comparable prediction performance for V4 as for V1 neurons. Our model does not rely on any pre-defined image features but only on invariance and sparse coding principles. We interpret our model using sparse principal component analysis and discover two groups of neurons: those selective to texture versus those selective to contours. This supports the thesis that one primary role of V4 is to extract objects from background in the visual field. Moreover, our study also confirms the diversity of V4 neurons. Among those selective to contours, some of them are selective to orientation, others to acute curvature features.<br />
(This is joint work with J. Mairal, Y. Benjamini, B. Willmore, M. Oliver and J. Gallant.)<br />
<br />
'''30 Nov 2012''' <br />
* Speaker: Yan Karklin<br />
* Affiliation: NYU<br />
* Host: Tyler<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''10 Dec 2012 (note this would be the Monday after NIPS)''' <br />
* Speaker: Marius Pachitariu<br />
* Affiliation: Gatsby / UCL<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: NIPS paper "Learning visual motion in recurrent neural networks"<br />
* Abstract: We present a dynamic nonlinear generative model for visual motion based on a latent representation of binary-gated Gaussian variables connected in a network. Trained on sequences of images by an STDP-like rule, the model learns to represent different movement directions in different variables. We use an online approximate inference scheme that can be mapped to the dynamics of networks of neurons. Probed with drifting grating stimuli and moving bars of light, neurons in the model show patterns of responses analogous to those of direction-selective simple cells in primary visual cortex. We show how the computations of the model are enabled by a specific pattern of learnt asymmetric recurrent connections. I will also briefly discuss our application of recurrent neural networks as statistical models of simultaneously recorded spiking neurons.<br />
<br />
'''12 Dec 2012''' <br />
* Speaker: Ian Goodfellow<br />
* Affiliation: U Montreal<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''7 Jan 2013'''<br />
* Speaker: Stuart Hameroff<br />
* Affiliation: University of Arizona <br />
* Host: Gautam Agarwal<br />
* Status: confirmed<br />
* Title: Quantum cognition and brain microtubules <br />
* Abstract: Cognitive decision processes are generally modeled with classical Bayesian probabilities, but may be better suited to quantum mathematics. For example: 1) Psychological conflict, ambiguity and uncertainty can be viewed as (quantum) superposition of multiple possible judgments and beliefs. 2) Measurement (e.g. answering a question, reaching a decision) reduces possibilities to definite states (‘constructing reality’, ‘collapsing the wave function’). 3) Previous questions influence subsequent answers, so sequence affects outcomes (‘contextual non-commutativity’). 4) Judgments and choices may deviate from classical logic, suggesting random, or ‘non-computable’ quantum influences. Can quantum cognition operate in the brain? Do classical brain activities simulate quantum processes? Or have biomolecular quantum devices evolved? In this talk I will discuss how a finer-scale, intra-neuronal level of quantum information processing in cytoskeletal microtubules can accumulate, operate upon and integrate quantum information and memory for self-collapse to classical states which regulate axonal firings, controlling behavior.<br />
<br />
'''Monday 14 Jan 2013, 1:00pm'''<br />
* Speaker: Dibyendu Mandal <br />
* Affiliation: Physics Dept., University of Maryland (Jarzynski group)<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: An exactly solvable model of Maxwell’s demon<br />
* Abstract: The paradox of Maxwell’s demon has stimulated numerous thought experiments, leading to discussions about the thermodynamic implications of information processing. However, the field has lacked a tangible example or model of an autonomous, mechanical system that reproduces the actions of the demon. To address this issue, we introduce an explicit model of a device that can deliver work to lift a mass against gravity by rectifying thermal fluctuations, while writing information to a memory register. We solve for the steady-state behavior of the model and construct its nonequilibrium phase diagram. In addition to the engine-like action described above, we identify a Landauer eraser region in the phase diagram where the model uses externally supplied work to remove information from the memory register. Our model offers a simple paradigm for investigating the thermodynamics of information processing by exposing a transparent mechanism of operation.<br />
<br />
'''23 Jan 2013'''<br />
* Speaker: Carlos Brody<br />
* Affiliation: Princeton<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Neural substrates of decision-making in the rat<br />
* Abstract: Gradual accumulation of evidence is thought to be a fundamental component of decision-making. Over the last 16 years, research in non-human primates has revealed neural correlates of evidence accumulation in parietal and frontal cortices, and other brain areas. However, the circuit mechanisms underlying these neural correlates remain unknown. Reasoning that a rodent model of evidence accumulation would allow a greater number of experimental subjects, and therefore experiments, as well as facilitate the use of molecular tools, we developed a rat accumulation-of-evidence task, the "Poisson Clicks" task. In this task, sensory evidence is delivered in pulses whose precisely controlled timing varies widely within and across trials. The resulting data are analyzed with models of evidence accumulation that use the richly detailed information of each trial’s pulse timing to distinguish between different decision mechanisms. The method provides great statistical power, allowing us to: (1) provide compelling evidence that rats are indeed capable of gradually accumulating evidence for decision-making; (2) accurately estimate multiple parameters of the decision-making process from behavioral data; and (3) measure, for the first time, the diffusion constant of the evidence accumulator, which we show to be optimal (i.e., equal to zero). In addition, the method provides a trial-by-trial, moment-by-moment estimate of the value of the accumulator, which can then be compared in awake behaving electrophysiology experiments to trial-by-trial, moment-by-moment neural firing-rate measures. Based on such a comparison, we describe data and a novel analysis approach that reveals differences between parietal and frontal cortices in the neural encoding of accumulating evidence. Finally, using semi-automated training methods to produce tens of rats trained in the Poisson Clicks task, we have also used pharmacological inactivation to ask, for the first time, whether parietal and frontal cortices are required for accumulation of evidence, and we are using optogenetic methods to rapidly and transiently inactivate brain regions so as to establish precisely when, during each decision-making trial, each brain region's activity is necessary for performance of the task.<br />
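The task structure and the ideal (zero-diffusion) accumulator described above can be sketched as follows. Click rates, trial duration, and function names are illustrative assumptions, not the paper's parameters.

```python
import random

def poisson_clicks_trial(r_left, r_right, duration=1.0, seed=None):
    """One 'Poisson Clicks' trial: clicks on each side arrive as Poisson
    processes; a noiseless accumulator just counts right minus left."""
    rng = random.Random(seed)

    def click_times(rate):
        t, times = 0.0, []
        while True:
            t += rng.expovariate(rate)  # i.i.d. exponential inter-click intervals
            if t > duration:
                return times
            times.append(t)

    left, right = click_times(r_left), click_times(r_right)
    evidence = len(right) - len(left)  # ideal accumulator, diffusion constant zero
    if evidence == 0:
        return rng.choice(["left", "right"])  # guess on ties
    return "right" if evidence > 0 else "left"

trials = 2000
correct = sum(poisson_clicks_trial(10, 30, seed=i) == "right" for i in range(trials))
accuracy = correct / trials
print(accuracy > 0.9)  # strong rate asymmetry yields mostly correct choices
```

Because the pulse times are fully known on every trial, a model fit can compare each choice against the exact evidence stream, which is the source of the statistical power the abstract emphasizes.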
<br />
'''28 Jan 2013'''<br />
* Speaker: Eugene M. Izhikevich<br />
* Affiliation: Brain Corporation<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Spikes<br />
* Abstract: Most communication in the brain is via spikes. While we understand the spike-generation mechanism of individual neurons, we fail to appreciate the spike-timing code and its role in neural computations. The speaker starts with simple models of neuronal spiking and bursting, describes small neuronal circuits that learn spike-timing code via spike-timing dependent plasticity (STDP), and finishes with biologically detailed and anatomically accurate large-scale brain models.<br />
<br />
'''29 Jan 2013'''<br />
* Speaker: Goren Gordon<br />
* Affiliation: Weizmann Institute<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Hierarchical Curiosity Loops – Model, Behavior and Robotics<br />
* Abstract: Autonomously learning about one's own body and its interaction with the environment is a formidable challenge, yet it is ubiquitous in biology: every animal pup and every human infant accomplishes this task in the first few months of life. Furthermore, biological agents’ curiosity actively drives them to explore and experiment in order to expedite their learning progress. To bridge the gap between biological and artificial agents, a formal mathematical theory of curiosity was developed that attempts to explain observed biological behaviors and enable the emergence of curiosity in robots. In the talk, I will present the hierarchical curiosity loops model, its application to rodents' exploratory behavior, and its implementation in a fully autonomously learning and behaving reaching robot.<br />
<br />
'''29 Jan 2013'''<br />
* Speaker: Jenny Read<br />
* Affiliation: Institute of Neuroscience, Newcastle University<br />
* Host: Sarah<br />
* Status: confirmed<br />
* Title: Stereoscopic vision<br />
* Abstract: [To be written]<br />
<br />
'''7 Feb 2013'''<br />
* Speaker: Valero Laparra<br />
* Affiliation: University of Valencia<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Empirical statistical analysis of phases in Gabor filtered natural images<br />
* Abstract:<br />
<br />
'''20 Feb 2013'''<br />
* Speaker: Dolores Bozovic<br />
* Affiliation: UCLA<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Bifurcations and phase-locking dynamics in the auditory system<br />
* Abstract: The inner ear constitutes a remarkable biological sensor that exhibits nanometer-scale sensitivity of mechanical detection. The first step in auditory processing is performed by hair cells, which convert movement into electrical signals via the opening of mechanically gated ion channels. These cells operate in a viscous medium, but can nevertheless sustain oscillations, amplify incoming signals, and even exhibit spontaneous motility, indicating the presence of an underlying active amplification system. Theoretical models have proposed that a hair cell constitutes a nonlinear system with an internal feedback mechanism that can drive it across a bifurcation and into an unstable regime. Our experiments explore the nonlinear response as well as feedback mechanisms that enable self-tuning already at the peripheral level, as measured in vitro on sensory tissue. A simple dynamical-systems framework will be discussed that captures the main features of the experimentally observed behavior in the form of an Arnold tongue.<br />
<br />
'''27 March 2013'''<br />
* Speaker: Dale Purves<br />
* Affiliation: Duke<br />
* Host: Sarah<br />
* Status: confirmed<br />
* Title: How Visual Evolution Determines What We See<br />
* Abstract: Information about the physical world is excluded from visual stimuli by the nature of biological vision (the inverse optics problem). Nonetheless, humans and other visual animals routinely succeed in their environments. The talk will explain how the assignment of perceptual values to visual stimuli according to the frequency of occurrence of stimulus patterns resolves the inverse problem and determines the basic visual qualities we see. This interpretation of vision implies that the best (and perhaps the only) way to understand visual system circuitry is to evolve it, an idea supported by recent work.<br />
<br />
'''9 April 2013'''<br />
* Speaker: Mounya Elhilali<br />
* Affiliation: Johns Hopkins<br />
* Host: Tyler<br />
* Status: confirmed<br />
* Title: Attention at the cocktail party: Neural bases and computational strategies for auditory scene analysis<br />
* Abstract: The perceptual organization of sounds in the environment into coherent objects is a feat constantly facing the auditory system. It manifests itself in the everyday challenge faced by humans and animals alike to parse complex acoustic information arising from multiple sound sources into separate auditory streams. While seemingly effortless, uncovering the neural mechanisms and computational principles underlying this remarkable ability remain a challenge for both the experimental and theoretical neuroscience communities. In this talk, I discuss the potential role of neuronal tuning in mammalian primary auditory cortex in mediating this process. I also examine the role of mechanisms of attention in adapting this neural representation to reflect both the sensory content and the changing behavioral context of complex acoustic scenes.<br />
<br />
'''17th of April 2013'''<br />
* Speaker: Wiktor Młynarski<br />
* Affiliation: Max Planck Institute for Mathematics in the Sciences<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: Statistical Models of Binaural Sounds<br />
* Abstract: The auditory system exploits disparities in the sounds arriving at the left and right ear to extract information about the spatial configuration of sound sources. According to the widely acknowledged Duplex Theory, sounds of low frequency are localized based on Interaural Time Differences (ITDs) and localization of high frequency sources relies on Interaural Level Differences (ILDs). Natural sounds, however, possess a rich structure and contain multiple frequency components. This leads to the question: what are the contributions of different cues to sound position identification in the natural environment and how much information do they carry about its spatial structure? In this talk, I will present my attempts to answer the above questions using statistical, generative models of naturalistic (simulated) and fully natural binaural sounds.<br />
<br />
'''15 May 2013'''<br />
* Speaker: Byron Yu<br />
* Affiliation: CMU<br />
* Host: Bruno/Jose (jointly sponsored with CNEP)<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''22 May 2013'''<br />
* Speaker: Bijan Pesaran<br />
* Affiliation: NYU<br />
* Host: Bruno/Jose (jointly sponsored with CNEP)<br />
* Status: confirmed <br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
=== 2011/12 academic year ===<br />
<br />
'''15 Sep 2011 (Thursday, at noon)'''<br />
* Speaker: Kathrin Berkner<br />
* Affiliation: Ricoh Innovations Inc.<br />
* Host: Ivana Tosic<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''21 Sep 2011'''<br />
* Speaker: Mike Kilgard<br />
* Affiliation: UT Dallas<br />
* Host: Michael Silver<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''27 Sep 2011'''<br />
* Speaker: Moshe Gur<br />
* Affiliation: Dept. of Biomedical Engineering, Technion, Israel Institute of Technology<br />
* Host: Bruno/Stan<br />
* Status: Confirmed<br />
* Title: On the unity of perception: How does the brain integrate activity evoked at different cortical loci?<br />
* Abstract: Any physical device we know of, including a computer, must send information from A and B to a common point C in order to compare A to B. I have performed experiments in three modalities (somatosensory, auditory, and visual) in which two different loci in the primary cortex are stimulated, and I argue that the "machine" convergence hypothesis cannot explain the perceptual results. Thus we must assume a non-converging mechanism whereby the brain, at times, can compare (integrate, process) events that take place at different loci without sending the information to a common target. Once we allow for such a mechanism, many phenomena can be viewed differently. Take, for example, the question of how and where multi-sensory integration takes place: we perceive a synchronized talking face, yet detailed visual and auditory information are represented at very different brain loci.<br />
<br />
'''5 Oct 2011'''<br />
* Speaker: Susanne Still<br />
* Affiliation: University of Hawaii at Manoa<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: Predictive power, memory and dissipation in learning systems operating far from thermodynamic equilibrium<br />
* Abstract: Understanding the physical processes that underlie the functioning of biological computing machinery often requires describing processes that occur far from thermodynamic equilibrium. In recent years significant progress has been made in this area, most notably Jarzynski’s work relation and Crooks’ fluctuation theorem. In this talk I will explore how dissipation of energy is related to a system's information-processing inefficiency. The focus is on driven systems that are embedded in a stochastic operating environment. If we describe the system as a state machine, then we can interpret the stochastic dynamics as performing a computation that results in an (implicit) model of the stochastic driving signal. I will show that instantaneous non-predictive information, which serves as a measure of model inefficiency, provides a lower bound on the average dissipated work. This implies that learning systems with larger predictive power can operate more energetically efficiently. One might speculate that biological systems have evolved to reflect this kind of adaptation. An interesting insight is that purely physical considerations lead to a requirement perfectly in line with the general belief that a useful model must be predictive (at fixed model complexity). Our result thereby ties together ideas from learning theory with basic non-equilibrium thermodynamics.<br />
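The bound sketched above can be written compactly. In generic notation (the symbols here are my choice, not the talk's), with s_t the system state and x_t the stochastic driving signal, the related published result takes the form:

```latex
% Dissipated work is bounded below by the non-predictive information:
% memory about the present drive minus predictive power about the next one.
\beta \,\langle W_{\mathrm{diss}}(t) \rangle \;\ge\;
I(s_t ; x_t) \;-\; I(s_t ; x_{t+1})
```

The right-hand side is the instantaneous non-predictive information: memory retained about the drive that carries no predictive power. Driving it to zero is exactly the condition for both thermodynamic and predictive efficiency.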
<br />
'''19 Oct 2011'''<br />
* Speaker: Graham Cummins<br />
* Affiliation: WSU<br />
* Host: Jeff Teeters<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''26 Oct 2011'''<br />
* Speaker: Shinji Nishimoto<br />
* Affiliation: Gallant lab, UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''14 Dec 2011'''<br />
* Speaker: Austin Roorda<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: How the unstable eye sees a stable and moving world<br />
* Abstract:<br />
<br />
'''11 Jan 2012'''<br />
* Speaker: Ken Nakayama<br />
* Affiliation: Harvard University<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Subjective Contours<br />
* Abstract: The concept of the receptive field in visual science has been transformative. It fueled great discoveries of the second half of the 20th century, providing the dominant understanding of how the visual system works at its early stages. Its reign has been extended to the field of object recognition, where, in the form of a linear classifier, it provides a framework to understand visual object recognition (DiCarlo and Cox, 2007).<br />
Untamed, however, are areas of visual perception, now more or less ignored, dubbed variously the 2.5D sketch, mid-level vision, or surface representations. Here, neurons with their receptive fields seem unable to bridge the gap, to supply us with even a plausible speculative framework to understand amodal completion, subjective contours and other surface phenomena. Correspondingly, these areas have become a backwater, ignored, leapt over.<br />
Subjective contours, however, remain as vivid as ever, even more so.<br />
Every day, our visual system makes countless visual inferences as to the layout of the world's surfaces and objects. What’s remarkable is that subjective contours visibly reveal these inferences.<br />
<br />
'''Tuesday, 24 Jan 2012'''<br />
* Speaker: Aniruddha Das<br />
* Affiliation: Columbia University<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''22 Feb 2012'''<br />
* Speaker: Elad Schneidman <br />
* Affiliation: Department of Neurobiology, Weizmann Institute of Science<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Sparse high order interaction networks underlie learnable neural population codes<br />
* Abstract:<br />
<br />
'''29 Feb 2012 (at noon as usual)'''<br />
* Speaker: Heather Read<br />
* Affiliation: U. Connecticut<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: "Transformation of sparse temporal coding from auditory colliculus and cortex"<br />
* Abstract: TBD<br />
<br />
'''1 Mar 2012 (note: Thurs)'''<br />
* Speaker: Daniel Zoran<br />
* Affiliation: Hebrew University, Jerusalem<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''7 Mar 2012'''<br />
* Speaker: David Sivak<br />
* Affiliation: UCB<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''8 Mar 2012'''<br />
* Speaker: Ivan Schwab<br />
* Affiliation: UC Davis<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Evolution's Witness: How Eyes Evolved<br />
* Abstract:<br />
<br />
'''14 Mar 2012'''<br />
* Speaker: David Sussillo<br />
* Affiliation:<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''18 April 2012'''<br />
* Speaker: Kristofer Bouchard<br />
* Affiliation: UCSF<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Cortical Foundations of Human Speech Production<br />
* Abstract:<br />
<br />
'''23 May 2012''' (rescheduled from April 11)<br />
* Speaker: Logan Grosenick<br />
* Affiliation: Stanford, Deisseroth & Suppes Labs<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: Acquisition, creation, & analysis of 4D light fields with applications to calcium imaging & optogenetics<br />
* Abstract: In Light Field Microscopy (LFM), images can be computationally refocused after they are captured [1]. This permits acquiring focal stacks and reconstructing volumes from a single camera frame. In Light Field Illumination (LFI), the same ideas can be used to create an illumination system that can deliver focused light to any position in a volume without moving optics, and these two devices (LFM/LFI) can be used together in the same system [2]. So far, these imaging and illumination systems have largely been used independently in proof-of-concept experiments [1,2]. In this talk I will discuss applications of a combined scanless volumetric imaging and volumetric illumination system applied to 4D calcium imaging and photostimulation of neurons in vivo and in vitro. The volumes resulting from these methods are large (>500,000 voxels per time point), collected at 10-100 frames per second, and highly correlated in space and time. Analyzing such data has required the development and application of machine learning methods appropriate to large, sparse, nonnegative data, as well as the estimation of neural graphical models from calcium transients. This talk will cover the reconstruction and creation of volumes in a microscope using Light Fields [1,2], and the current state-of-the-art for analyzing these large volumes in the context of calcium imaging and optogenetics. <br />
<br />
[1] M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz. Light Field Microscopy. ACM Transactions on Graphics 25(3), Proceedings of SIGGRAPH 2006.<br />
[2] M. Levoy, Z. Zhang, and I. McDowall. Recording and controlling the 4D light field in a microscope. Journal of Microscopy, Volume 235, Part 2, 2009, pp. 144-162. Cover article.<br />
<br />
BIO: Logan received bachelor's degrees with honors in Biology and Psychology from Stanford, and a master's degree in Statistics from Stanford. He is a Ph.D. candidate in the Neurosciences Program working in the labs of Karl Deisseroth and Patrick Suppes, and a trainee at the Stanford Center for Mind, Brain, and Computation. He is interested in developing and applying novel computational imaging and machine learning techniques in order to observe, control, and understand neuronal circuit dynamics.<br />
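As a rough illustration of the computational refocusing underlying LFM [1], here is a minimal shift-and-sum sketch over the sub-aperture views of a 4D light field (the array layout, integer shifts, and function name are simplifying assumptions, not the actual LFM pipeline):<br />

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetic refocusing by shift-and-sum over sub-aperture views.

    lightfield: array of shape (U, V, X, Y) -- angular (u, v) by spatial (x, y).
    alpha: relative focal depth; alpha = 0 keeps the native focal plane.
    """
    U, V, X, Y = lightfield.shape
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            # shift each view in proportion to its angular offset from center
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# toy example: a constant light field refocuses to the same constant image
lf = np.ones((3, 3, 8, 8))
img = refocus(lf, alpha=1.0)
```

Real light-field refocusing uses sub-pixel resampling and a proper optical model; this sketch only conveys the idea that focal stacks and volumes can be recomputed from the rays captured in a single camera frame.<br />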
<br />
'''7 June 2012''' (Thursday)<br />
* Speaker: Mitya Chklovskii<br />
* Affiliation: Janelia Farm<br />
* Host: Bruno<br />
* Status:<br />
* Title:<br />
* Abstract:<br />
<br />
'''27 June 2012''' <br />
* Speaker: Jerry Feldman<br />
* Affiliation:<br />
* Host: Bruno<br />
* Status:<br />
* Title:<br />
* Abstract:<br />
<br />
'''30 July 2012''' <br />
* Speaker: Lucas Theis<br />
* Affiliation: Matthias Bethge lab, Werner Reichardt Centre for Integrative Neuroscience, Tübingen<br />
* Host: Jascha<br />
* Status: Confirmed<br />
* Title: Hierarchical models of natural images<br />
* Abstract: Probabilistic models of natural images have been used to solve a variety of computer vision tasks and as a means to better understand the computations performed by the visual system in the brain. Many theoretical considerations and biological observations suggest that natural image models should be hierarchically organized, yet to date the best-known models are still based on what are better described as shallow representations. In this talk, I will present two image models. One is based on the idea of Gaussianization for greedily constructing hierarchical generative models. I will show that when combined with independent subspace analysis, it is able to compete with the state of the art for modeling image patches. The other model combines mixtures of Gaussian scale mixtures with a directed graphical model and multiscale image representations, and is able to generate highly structured images of arbitrary size. Evaluating the model's likelihood and comparing it to a large number of other image models shows that it might well be the best model of natural images yet.<br />
<br />
(joint work with Reshad Hosseini and Matthias Bethge)<br />
<br />
=== 2010/11 academic year ===<br />
<br />
'''02 Sep 2010'''<br />
* Speaker: Johannes Burge<br />
* Affiliation: University of Texas at Austin<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''8 Sep 2010'''<br />
* Speaker: Tobi Szuts<br />
* Affiliation: Meister Lab/ Harvard U.<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Wireless recording of neural activity in the visual cortex of a freely moving rat.<br />
* Abstract: Conventional neural recording systems restrict behavioral experiments to a flat indoor environment compatible with the cable that tethers the subject to the recording instruments. To overcome these constraints, we developed a wireless multi-channel system for recording neural signals from a freely moving animal the size of a rat or larger. The device takes up to 64 voltage signals from implanted electrodes, samples each at 20 kHz, time-division multiplexes them onto a single output line, and transmits that output by radio frequency to a receiver and recording computer more than 60 m away. The system introduces less than 4 µV RMS of electrode-referred noise, comparable to wired recording systems and considerably less than biological noise. The system has a greater channel count and transmission distance than existing telemetry systems. The wireless system has been used to record from the visual cortex of a rat during unconstrained conditions. Outdoor recordings show V1 activity is modulated by nest-building activity. During unguided behavior indoors, neurons responded rapidly and consistently to changes in light level, suppressive effects were prominent in response to an illuminant transition, and firing rate was strongly modulated by locomotion. Neural firing in the visual cortex is relatively sparse and moderate correlations are observed over large distances, suggesting that synchrony is driven by global processes.<br />
<br />
'''29 Sep 2010'''<br />
* Speaker: Vikash Gilja<br />
* Affiliation: Stanford University<br />
* Host: Charles<br />
* Status: Confirmed<br />
* Title: Towards Clinically Viable Neural Prosthetic Systems.<br />
* Abstract:<br />
<br />
'''20 Oct 2010'''<br />
* Speaker: Alexandre Francois<br />
* Affiliation: USC<br />
* Host: <br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''3 Nov 2010'''<br />
* Speaker: Eric Jonas and Vikash Mansinghka<br />
* Affiliation: Navia Systems<br />
* Host: Jascha<br />
* Status: Confirmed<br />
* Title: Natively Probabilistic Computation: Principles, Artifacts, Architectures and Applications<br />
* Abstract: Complex probabilistic models and Bayesian inference are becoming increasingly critical across science and industry, especially in large-scale data analysis. They are also central to our best computational accounts of human cognition, perception and action. However, all these efforts struggle with the infamous curse of dimensionality. Rich probabilistic models can seem hard to write and even harder to solve, as specifying and calculating probabilities often appears to require the manipulation of exponentially (and sometimes infinitely) large tables of numbers.<br />
<br />
We argue that these difficulties reflect a basic mismatch between the needs of probabilistic reasoning and the deterministic, functional orientation of our current hardware, programming languages and CS theory. To mitigate these issues, we have been developing a stack of abstractions for natively probabilistic computation, based around stochastic simulators (or samplers) for distributions, rather than evaluators for deterministic functions. Ultimately, our aim is to produce a model of computation and the associated hardware and programming tools that are as suited for uncertain inference and decision-making as our current computers are for precise arithmetic.<br />
<br />
In this talk, we will give an overview of the entire stack of abstractions supporting natively probabilistic computation, with technical detail on several hardware and software artifacts we have implemented so far. We will also touch on some new theoretical results regarding the computational complexity of probabilistic programs. Throughout, we will motivate and connect this work to some current applications in biomedical data analysis and computer vision, as well as potential hypotheses regarding the implementation of probabilistic computation in the brain.<br />
<br />
This talk includes joint work with Keith Bonawitz, Beau Cronin, Cameron Freer, Daniel Roy and Joshua Tenenbaum.<br />
<br />
BRIEF BIOGRAPHY<br />
<br />
Vikash Mansinghka is a co-founder and the CTO of Navia Systems, a venture-funded startup company building natively probabilistic computing machines. He spent 10 years at MIT, eventually earning an S.B. in Mathematics, an S.B. in Computer Science, an M.Eng. in Computer Science, and a PhD in Computation. He held graduate fellowships from the NSF and MIT's Lincoln Laboratory, and his PhD dissertation won the 2009 MIT George M. Sprowls award for best dissertation in computer science. He currently serves on DARPA's Information Science and Technology (ISAT) Study Group.<br />
<br />
Eric Jonas is a co-founder of Navia Systems, responsible for in-house accelerated inference research and development. He spent ten years at MIT, where he earned S.B. degrees in electrical engineering and computer science and in neurobiology, an M.Eng. in EECS, with a neurobiology PhD expected really soon. He’s passionate about biological applications of probabilistic reasoning and hopes to use Navia’s capabilities to combine data from biological science, clinical histories, and patient outcomes into seamless models.<br />
<br />
'''8 Nov 2010'''<br />
* Speaker: Patrick Ruther<br />
* Affiliation: Imtek, University of Freiburg<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''10 Nov 2010'''<br />
* Speaker: Aurel Lazar<br />
* Affiliation: Department of Electrical Engineering, Columbia University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Encoding Visual Stimuli with a Population of Hodgkin-Huxley Neurons<br />
* Abstract: We first present a general framework for the reconstruction of natural video scenes encoded with a population of spiking neural circuits with random thresholds. The visual encoding system consists of a bank of filters, modeling the visual receptive fields, in cascade with a population of neural circuits, modeling encoding with spikes in the early visual system. The neuron models considered include integrate-and-fire neurons and ON-OFF neuron pairs with threshold-and-fire spiking mechanisms. All thresholds are assumed to be random. We show that for both time-varying and space-time-varying stimuli, neural spike encoding is akin to taking noisy measurements on the stimulus.<br />
Second, we formulate the reconstruction problem as the minimization of a suitable cost functional in a finite-dimensional vector space and provide an explicit algorithm for stimulus recovery. We also present a general solution using the theory of smoothing splines in Reproducing Kernel Hilbert Spaces. We provide examples for both synthetic video and natural scenes, and show that the quality of the reconstruction degrades gracefully as the threshold variability of the neurons increases.<br />
Third, we demonstrate a number of simple operations on the original visual stimulus, including translations, rotations and zooming. All these operations are natively executed in the spike domain. The processed spike trains are decoded for the faithful recovery of the stimulus and its transformations.<br />
Finally, we extend the above results to neural encoding circuits built with Hodgkin-Huxley neurons.<br />
References:<br />
Aurel A. Lazar, Eftychios A. Pnevmatikakis and Yiyin Zhou, Encoding Natural Scenes with Neural Circuits with Random Thresholds, Vision Research, 2010, Special Issue on Mathematical Models of Visual Coding, http://dx.doi.org/10.1016/j.visres.2010.03.015<br />
Aurel A. Lazar, Population Encoding with Hodgkin-Huxley Neurons, IEEE Transactions on Information Theory, Volume 56, Number 2, pp. 821-837, February 2010, Special Issue on Molecular Biology and Neuroscience, http://dx.doi.org/10.1109/TIT.2009.2037040<br />
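To illustrate the claim that spike encoding is akin to taking noisy measurements on the stimulus, here is a minimal sketch of an ideal integrate-and-fire time encoder (parameter names and values are illustrative; the circuits in the talk additionally include receptive-field filtering and random thresholds):<br />

```python
import numpy as np

def if_encode(stimulus, dt, bias, kappa, delta):
    """Encode a sampled stimulus u(t) with an ideal integrate-and-fire neuron.

    The membrane integrates (u + bias) / kappa; each crossing of the
    threshold delta emits a spike time and resets by subtraction. Every
    inter-spike interval then pins down one integral of the stimulus --
    the sense in which a spike train is a set of measurements on u(t).
    """
    spikes, v = [], 0.0
    for i, u in enumerate(stimulus):
        v += (u + bias) * dt / kappa
        if v >= delta:
            spikes.append(i * dt)
            v -= delta
    return spikes

dt = 1e-3
t = np.arange(0.0, 1.0, dt)
u = 0.5 * np.sin(2 * np.pi * 3 * t)   # a bandlimited test stimulus
spike_times = if_encode(u, dt, bias=1.0, kappa=1.0, delta=0.05)
```

With the bias chosen so that u + bias stays positive, the neuron fires steadily and each interval between consecutive spikes corresponds to the constraint that the integral of (u + bias)/kappa over that interval equals delta.<br />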
<br />
'''11 Nov 2010''' (UCB holiday)<br />
* Speaker: Martha Nari Havenith<br />
* Affiliation: UCL<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: Finding spike timing in the visual cortex - Oscillations as the internal clock of vision?<br />
* Abstract:<br />
<br />
'''19 Nov 2010''' (note: on Friday because of SFN)<br />
* Speaker: Dan Butts<br />
* Affiliation: UMD<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: Common roles of inhibition in visual and auditory processing.<br />
* Abstract: The role of inhibition in sensory processing is often obscured in extracellular recordings, because the absence of a neuronal response associated with inhibition might also be explained by a simple lack of excitation. However, increasingly, evidence from intracellular recordings demonstrates important roles of inhibition in shaping the stimulus selectivity of sensory neurons in both the visual and auditory systems. We have developed a nonlinear modeling approach that can identify putative excitatory and inhibitory inputs to a neuron using standard extracellular recordings, and have applied these techniques to understand the role of inhibition in shaping sensory processing in visual and auditory areas. In pre-cortical visual areas (retina and LGN), we find that inhibition likely plays a role in generating temporally precise responses, and mediates adaptation to changing contrast. In an auditory pre-cortical area (the inferior colliculus), identified inhibition has a nearly identical appearance and function in temporal processing and adaptation. Thus, we predict common roles of inhibition in these sensory areas, and more broadly demonstrate methods for characterizing the nonlinear computations that comprise sensory processing.<br />
<br />
'''24 Nov 2010'''<br />
* Speaker: Eizaburo Doi<br />
* Affiliation: NYU<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''29 Nov 2010 - informal talk'''<br />
* Speaker: Eero Lehtonen<br />
* Affiliation: UTU Finland<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Memristors<br />
* Abstract:<br />
<br />
'''1 Dec 2010'''<br />
* Speaker: Gadi Geiger<br />
* Affiliation: MIT<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: Visual and Auditory Perceptual Modes that Characterize Dyslexics<br />
* Abstract: I will describe how dyslexics’ visual and auditory perception is wider and more diffuse than that of typical readers, suggesting wider neural tuning in dyslexics. I will also describe how this processing relates to difficulties in reading. Strengthening the argument, and more importantly helping dyslexics, I will describe a regimen of practice that improves reading in dyslexics while narrowing perception.<br />
<br />
'''13 Dec 2010'''<br />
* Speaker: Jorg Lueke<br />
* Affiliation: FIAS<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Linear and Non-linear Approaches to Component Extraction and Their Applications to Visual Data<br />
* Abstract: In the nervous system of humans and animals, sensory data are represented as combinations of elementary data components. While for data such as sound waveforms the elementary components combine linearly, other data can better be modeled by non-linear forms of component superpositions. I motivate and discuss two models with binary latent variables: one using standard linear superpositions of basis functions and one using non-linear superpositions. Crucial for the applicability of both models are efficient learning procedures. I briefly introduce a novel training scheme (ET) and show how it can be applied to probabilistic generative models. For linear and non-linear models the scheme efficiently infers the basis functions as well as the level of sparseness and data noise. In large-scale applications to image patches, we show results on the statistics of inferred model parameters. Differences between the linear and non-linear models are discussed, and both models are compared to results of standard approaches in the literature and to experimental findings. Finally, I briefly discuss learning in a recent model that takes explicit component occlusions into account.<br />
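A toy contrast between the two superposition rules discussed in this abstract, assuming non-negative generative fields (the pointwise max used here is just one possible non-linear combination rule; the talk's occlusion-aware model may differ):<br />

```python
import numpy as np

rng = np.random.default_rng(2)

# H binary latent causes, each with a non-negative generative field over D pixels.
H, D = 5, 16
W = rng.uniform(0.0, 1.0, size=(H, D))           # generative fields (illustrative)
s = np.array([True, False, True, False, False])  # active binary latents

linear_image = W[s].sum(axis=0)   # standard linear superposition of components
maxcomb_image = W[s].max(axis=0)  # non-linear (pointwise-max) superposition
```

For non-negative components the max combination never exceeds the linear sum, which captures the intuition that occluding components replace, rather than add to, one another.<br />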
<br />
'''15 Dec 2010'''<br />
* Speaker: Claudia Clopath<br />
* Affiliation: Université Paris Descartes<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''18 Jan 2011'''<br />
* Speaker: Siwei Lyu<br />
* Affiliation: Computer Science Department, University at Albany, SUNY<br />
* Host: Bruno<br />
* Status: confirmed <br />
* Title: Divisive Normalization as an Efficient Coding Transform: Justification and Evaluation<br />
* Abstract:<br />
<br />
'''19 Jan 2011'''<br />
* Speaker: David Field (informal talk)<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: Tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''25 Jan 2011'''<br />
* Speaker: Ruth Rosenholtz<br />
* Affiliation: Dept. of Brain & Cognitive Sciences, Computer Science and AI Lab, MIT<br />
* Host: Bruno<br />
* Status: Confirmed <br />
* Title: What your visual system sees where you are not looking<br />
* Abstract:<br />
<br />
'''26 Jan 2011'''<br />
* Speaker: Ernst Niebur<br />
* Affiliation: Johns Hopkins U<br />
* Host: Fritz<br />
* Status: Confirmed <br />
* Title: <br />
* Abstract:<br />
<br />
'''16 March 2011'''<br />
* Speaker: Vladimir Itskov<br />
* Affiliation: University of Nebraska-Lincoln<br />
* Host: Chris<br />
* Status: Confirmed <br />
* Title: <br />
* Abstract:<br />
<br />
'''23 March 2011'''<br />
* Speaker: Bruce Cumming<br />
* Affiliation: National Institutes of Health<br />
* Host: Ivana<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract:<br />
<br />
'''27 April 2011'''<br />
* Speaker: Lubomir Bourdev<br />
* Affiliation: Computer Science, UC Berkeley<br />
* Host:Bruno<br />
* Status: Confirmed<br />
* Title: "Poselets and Their Applications in High-Level Computer Vision Problems"<br />
* Abstract:<br />
<br />
'''12 May 2011 (note: Thursday)'''<br />
* Speaker: Jack Culpepper<br />
* Affiliation: Redwood Center/EECS<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''26 May 2011'''<br />
* Speaker: Ian Stevenson<br />
* Affiliation: Northwestern University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Explaining tuning curves by estimating interactions between neurons<br />
* Abstract: One of the central tenets of systems neuroscience is that tuning curves are a byproduct of the interactions between neurons. Using multi-electrode recordings and recently developed inference techniques we can begin to examine this idea in detail and study how well we can explain the functional properties of neurons using the activity of other simultaneously recorded neurons. Here we examine datasets from six different brain areas, recorded during typical sensorimotor tasks, each with ~100 simultaneously recorded neurons. Using these datasets we measured the extent to which interactions between neurons can explain the tuning properties of individual neurons. We found that, in almost all areas, modeling interactions between 30-50 neurons allows more accurate spike prediction than tuning curves. This suggests that tuning can, in some sense, be explained by interactions between neurons in a variety of brain areas, even when recordings consist of relatively small numbers of neurons.<br />
<br />
'''1 June 2011'''<br />
* Speaker: Michael Oliver<br />
* Affiliation: Gallant lab<br />
* Host: Bruno<br />
* Status: Tentative <br />
* Title: <br />
* Abstract:<br />
<br />
'''8 June 2011'''<br />
* Speaker: Alyson Fletcher<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: Generalized Approximate Message Passing for Neural Receptive Field Estimation and Connectivity<br />
* Abstract: Fundamental to understanding sensory encoding and connectivity of neurons are effective tools for developing and validating complex mathematical models from experimental data. In this talk, I present a graphical models approach to the problems of neural connectivity reconstruction under multi-neuron excitation and of receptive field estimation of sensory neurons in response to stimuli. I describe a new class of Generalized Approximate Message Passing (GAMP) algorithms for a general class of inference problems on graphical models, based on Gaussian approximations of loopy belief propagation. The GAMP framework is extremely general, provides a systematic procedure for incorporating a rich class of nonlinearities, and is computationally tractable with large amounts of data. In addition, for both the connectivity reconstruction and parameter estimation problems, I show that GAMP-based estimation can naturally incorporate sparsity constraints in the model that arise from the fact that only a small fraction of the potential inputs have any influence on the output of a particular neuron. A simulation of reconstruction of cortical neural mapping under multi-neuron excitation shows that GAMP offers improvement over previous compressed sensing methods. The GAMP method is also validated on estimation of linear nonlinear Poisson (LNP) cascade models for neural responses of salamander retinal ganglion cells.<br />
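For concreteness, here is a minimal sketch of the LNP cascade mentioned at the end of the abstract, fit by plain gradient ascent rather than GAMP (all sizes, values, and the exponential nonlinearity are illustrative):<br />

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-nonlinear-Poisson (LNP) cascade: spike counts are Poisson with
# rate exp(stimulus . filter). Only a few stimulus dimensions matter, which
# is the kind of sparsity structure the abstract describes exploiting.
n_trials, dim = 500, 20
w_true = np.zeros(dim)
w_true[:4] = [0.5, -0.3, 0.4, 0.2]          # sparse "receptive field" (illustrative)
X = rng.standard_normal((n_trials, dim))    # white-noise stimuli
rate = np.exp(X @ w_true)
spikes = rng.poisson(rate)

# Maximum-likelihood fit of the filter for the exponential-Poisson model,
# by gradient ascent on the average log-likelihood (a simple stand-in for
# the GAMP machinery described in the talk).
w_hat = np.zeros(dim)
for _ in range(300):
    grad = X.T @ (spikes - np.exp(X @ w_hat)) / n_trials
    w_hat += 0.1 * grad
```

With enough trials the recovered filter closely tracks the true one; the point of sparsity-aware methods such as GAMP is to get comparable accuracy from far less data by exploiting the many near-zero filter weights.<br />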
<br />
=== 2009/10 academic year ===<br />
<br />
'''2 September 2009''' <br />
* Speaker: Keith Godfrey<br />
* Affiliation: University of Cambridge<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''7 October 2009'''<br />
* Speaker: Anita Schmid<br />
* Affiliation: Cornell University<br />
* Host: Kilian<br />
* Status: Confirmed<br />
* Title: Subpopulations of neurons in visual area V2 perform differentiation and integration operations in space and time<br />
* Abstract: The interconnected areas of the visual system work together to find object boundaries in visual scenes. Primary visual cortex (V1) mainly extracts oriented luminance boundaries, while secondary visual cortex (V2) also detects boundaries defined by differences in texture. How the outputs of V1 neurons are combined to allow for the extraction of these more complex boundaries in V2 is as yet unclear. To address this question, we probed the processing of orientation signals in single neurons in V1 and V2, focusing on response dynamics of neurons to patches of oriented gratings and to combinations of gratings in neighboring patches and sequential time frames. We found two kinds of response dynamics in V2, both of which are different from those of V1 neurons. While V1 neurons in general prefer one orientation, one subpopulation of V2 neurons (“transient”) shows a temporally dynamic preference, resulting in a preference for changes in orientation. The second subpopulation of V2 neurons (“sustained”) responds similarly to V1 neurons, but with a delay. The dynamics of nonlinear responses to combinations of gratings reinforce these distinctions: the dynamics enhance the preference of V1 neurons for continuous orientations, and enhance the preference of V2 transient neurons for discontinuous ones. We propose that transient neurons in V2 perform a differentiation operation on the V1 input, both spatially and temporally, while the sustained neurons perform an integration operation. We show that a simple feedforward network with delayed inhibition can account for the temporal but not for the spatial differentiation operation.<br />
<br />
'''28 October 2009'''<br />
* Speaker: Andrea Benucci<br />
* Affiliation: Institute of Ophthalmology, University College London<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Stimulus dependence of the functional connectivity between neurons in primary visual cortex<br />
* Abstract: It is known that visual stimuli are encoded by the concerted activity of large populations of neurons in visual cortical areas. However, it is only recently that recording techniques have been made available to study such activations from large ensembles of neurons simultaneously, with millisecond temporal precision and tens of microns spatial resolution. I will present data from voltage-sensitive dye (VSD) imaging and multi-electrode recordings (“Utah” probes) from the primary visual cortex of the cat (V1). I will discuss the relationship between two fundamental cortical maps of the visual system: the map of retinotopy and the map of orientation. Using spatially localized and full-field oriented stimuli, we studied the functional interdependency of these maps. I will describe traveling and standing waves of cortical activity and their key role as a dynamical substrate for the spatio-temporal coding of visual information. I will further discuss the properties of the spatio-temporal code in the context of continuous visual stimulation. While recording population responses to a sequence of oriented stimuli, we asked how responses to individual stimuli summate over time. We found that such rules are mostly linear, supporting the idea that spatial and temporal codes in area V1 operate largely independently. However, these linear rules of summation fail when the visual drive is removed, suggesting that the visual cortex can readily switch between a dynamical regime where either feed-forward or intra-cortical inputs determine the response properties of the network.<br />
<br />
'''12 November 2009 (Thursday)'''<br />
* Speaker: Song-Chun Zhu<br />
* Affiliation: UCLA<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''18 November 2009'''<br />
* Speaker: Dan Graham<br />
* Affiliation: Dept. of Mathematics, Dartmouth College<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: The Packet-Switching Brain: A Hypothesis<br />
* Abstract: Despite great advances in our understanding of neural responses to natural stimuli, the basic structure of the neural code remains elusive. In this talk, I will describe a novel hypothesis regarding the fundamental structure of neural coding in mammals. In particular, I propose that an internet-like routing architecture (specifically packet-switching) underlies neocortical processing, and I propose means of testing this hypothesis via neural response sparseness measurements. I will synthesize a host of suggestive evidence that supports this notion and will, more generally, argue in favor of a large scale shift from the now dominant “computer metaphor,” to the “internet metaphor.” This shift is intended to spur new thinking with regard to neural coding, and its main contribution is to privilege communication over computation as the prime goal of neural systems.<br />
<br />
'''16 December 2009'''<br />
* Speaker: Pietro Berkes<br />
* Affiliation: Volen Center for Complex Systems, Brandeis University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Generative models of vision: from sparse coding toward structured models<br />
* Abstract: From a computational perspective, one can think of visual perception as the problem of analyzing the light patterns detected by the retina to recover their external causes. This process requires combining the incoming sensory evidence with internal prior knowledge about general properties of visual elements and the way they interact, and can be formalized in a class of models known as causal generative models. In the first part of the talk, I will discuss the first and most established generative model, namely the sparse coding model. Sparse coding has been largely successful in showing how the main characteristics of simple cells' receptive fields can be accounted for based solely on the statistics of natural images. I will briefly review the evidence supporting this model, and contrast it with recent data from the primary visual cortex of ferrets and rats showing that the sparseness of neural activity over development and anesthesia seems to follow trends opposite to those predicted by sparse coding. In the second part, I will argue that the generative point of view calls for models of natural images that take into account more of the structure of the visual environment. I will present a model that takes a first step in this direction by incorporating the fundamental distinction between identity and attributes of visual elements. After learning, the model mirrors several aspects of the organization of V1, and results in a novel interpretation of complex and simple cells as parallel populations of cells, coding for different aspects of the visual input. Further steps toward more structured generative models might thus lead to a more comprehensive account of processing in the visual cortex.<br />
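As background for the first part of this talk, a minimal sketch of sparse coding inference: given a fixed dictionary D, infer coefficients by solving a sparsity-penalized reconstruction problem, here via iterative soft-thresholding (ISTA). The dictionary, penalty, and sizes are illustrative; the full sparse coding model also learns D from natural images.<br />

```python
import numpy as np

def ista(x, D, lam, n_iter=200):
    """Infer sparse coefficients a minimizing ||x - D a||^2 / 2 + lam * ||a||_1
    by iterative soft-thresholding (ISTA)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a + D.T @ (x - D @ a) / L      # gradient step on the quadratic term
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

# toy example: recover a 2-sparse code from an overcomplete dictionary
rng = np.random.default_rng(1)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary elements
a_true = np.zeros(32)
a_true[[3, 17]] = [1.5, -2.0]
x = D @ a_true
a_hat = ista(x, D, lam=0.05)
```

The inferred code concentrates on the few dictionary elements that actually generated the input, which is the analogy drawn between sparse codes and the selective responses of simple cells.<br />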
<br />
'''6 January 2010'''<br />
* Speaker: Susanne Still<br />
* Affiliation: U of Hawaii<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''20 January 2010'''<br />
* Speaker: Tom Dean<br />
* Affiliation: Google<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Accelerating Computer Vision and Machine Learning Algorithms with Graphics Processors<br />
* Abstract: Graphics processors (GPUs) and massively-multi-core architectures are becoming more powerful, less costly and more energy efficient, and the related programming language issues are beginning to sort themselves out. That said, most researchers don’t want to be writing code that depends on any particular architecture or parallel programming model. Linear algebra, Fourier analysis and image processing have standard libraries that are being ported to exploit SIMD parallelism in GPUs. We can depend on the massively-multi-core machines du jour to support these libraries and on the high-performance-computing (HPC) community to do the porting for us or with us. These libraries can significantly accelerate important applications in image processing, data analysis and information retrieval. We can develop APIs and the necessary run-time support so that code relying on these libraries will run on any machine in a cluster of computers but exploit GPUs whenever available. This strategy allows us to move toward hybrid computing models that enable a wider range of opportunities for parallelism without requiring the special training of programmers or the disadvantages of developing code that depends on specialized hardware or programming models. This talk summarizes the state of the art in massively-multi-core architectures, presents experimental results that demonstrate the potential for significant performance gains in the two general areas of image processing and machine learning, provides examples of the proposed programming interface, and presents some more detailed experimental results on one particular problem involving video-content analysis.<br />
<br />
'''27 January 2010'''<br />
* Speaker: David Philipona<br />
* Affiliation: Paris<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''24 February 2010'''<br />
* Speaker: Gordon Pipa<br />
* Affiliation: U Osnabrueck/MPI Frankfurt<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''3 March 2010'''<br />
* Speaker: Gaute Einevoll<br />
* Affiliation: UMB, Norway<br />
* Host: Amir<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
<br />
'''4 March 2010'''<br />
* Speaker: Harvey Swadlow<br />
* Affiliation: <br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''8 April 2010'''<br />
* Speaker: Alan Yuille <br />
* Affiliation: UCLA<br />
* Host: Amir<br />
* Status: Confirmed (for 1pm)<br />
* Title: <br />
* Abstract:<br />
<br />
'''28 April 2010'''<br />
* Speaker: Dharmendra Modha - cancelled<br />
* Affiliation: IBM<br />
* Host:Fritz<br />
* Status: Cancelled<br />
* Title: <br />
* Abstract:<br />
<br />
'''5 May 2010'''<br />
* Speaker: David Zipser<br />
* Affiliation: UCB<br />
* Host: Daniel Little<br />
* Status: Tentative<br />
* Title: Brytes 2:<br />
* Abstract:<br />
<br />
Brytes are little brains that can be assembled into larger, smarter brains. In my first talk I presented a biologically plausible, computationally tractable model of brytes and described how they can be used as subunits to build brains with interesting behaviors.<br />
<br />
In this talk I will first show how large numbers of brytes can cooperate to perform complicated actions such as arm and hand manipulations in the presence of obstacles. Then I describe a strategy for a higher level of control that informs each bryte what role it should play in accomplishing the current task. These results could have considerable significance for understanding the brain and possibly be applicable to robotics and BMI.<br />
<br />
'''12 May 2010'''<br />
* Speaker: Frank Werblin (Redwood group meeting - internal only)<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: Tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''19 May 2010'''<br />
* Speaker: Anna Judith<br />
* Affiliation: UCB<br />
* Host: Daniel Little (Redwood Lab Meeting - internal only)<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:</div>Giselyhttps://rctn.org/w/index.php?title=Seminars&diff=9025Seminars2017-10-03T00:09:18Z<p>Gisely: /* Tentative / Confirmed Speakers */</p>
<hr />
<div>== Instructions ==<br />
<br />
# Check the internal calendar (here) for a free seminar slot. Seminars are usually Wednesdays at noon, but the timing is flexible if another day works better for the speaker. However, it is usually best to avoid booking multiple speakers in the same week - it leads to "seminar burnout" and reduced attendance. But use your own judgement here - if it's a good opportunity and that's the only time that works, then go ahead with it.<br />
# Once you have proposed a date to a speaker, fill in the speaker information under the appropriate date (or change if necessary). Use the status field to indicate whether the date is tentative or confirmed. Please also include your name as ''host'' in case somebody wants to contact you.<br />
# Once the invitation is confirmed with the speaker, change the status field to 'confirmed'. Also notify the webmaster (Bruno) [mailto:baolshausen@berkeley.edu] that we have a confirmed speaker so that he/she can update the public web page. Please include a title and abstract.<br />
# Natalie (HWNI) checks our web page regularly and will send out an announcement a week before and also include with the weekly neuro announcements, but if you don't get it confirmed until the last minute then make sure to email Natalie [mailto:nrterranova@berkeley.edu] as well to give her a heads up so she knows to send out an announcement in time.<br />
# If the speaker needs accommodations, you should contact Natalie [mailto:nrterranova@berkeley.edu] to reserve a room at the faculty club. Tell her it's for a Redwood speaker so she knows how to bill it.<br />
# During the visit you will need to look after the visitor, schedule visits with other labs, make plans for lunch, dinner, etc., and introduce the speaker at the seminar (don't ask Bruno to do this at the last moment). Save receipts for any meals you paid for.<br />
# After the seminar and before the speaker leaves, make sure to give them Natalie's contact info and have them email her their receipts, explaining it's for reimbursement for a Redwood seminar. Natalie will then process the reimbursement. She can also help you with getting reimbursed for any expenses you incurred for meals and entertainment.<br />
<br />
== Tentative / Confirmed Speakers ==<br />
<br />
<br />
'''Oct. 11, 2017'''<br />
* Speaker: Deepak Pathak and Pulkit Agrawal<br />
* Time: 12:30 PM<br />
* Affiliation: UC Berkeley, BAIR<br />
* Host: Mayur Mudigonda<br />
* Status: Confirmed<br />
* Title: Curiosity and Rewards<br />
* Abstract:<br />
<br />
'''Nov. 8, 2017'''<br />
* Speaker: John Harte<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Maximum Entropy and the Inference of Patterns in Nature <br />
* Abstract:<br />
<br />
'''TBD, sometime in the Fall'''<br />
* Speaker: Evangelos Theodorou<br />
* Time: TBD<br />
* Affiliation: GeorgiaTech<br />
* Host: Mike/Dibyendu Mandal<br />
* Status: planning<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''October 25, 2017'''<br />
* Speaker: Caleb Kemere<br />
* Time: 12:00<br />
* Affiliation: Rice<br />
* Host: Guy Isely<br />
* Status: Confirmed<br />
* Title: Unsupervised Inference of the Hippocampal Population Code from Offline Activity<br />
* Abstract: TBD-- HMM-based hippocampal replay<br />
<br />
<br />
'''TBD, 2016'''<br />
* Speaker: Alexander Stubbs<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Michael Levy<br />
* Status: tentative<br />
* Title: Could chromatic aberration allow for an alternative evolutionary pathway towards color vision?<br />
* Abstract: We present a mechanism by which organisms with only a single photoreceptor, which have a monochromatic view of the world, can achieve color discrimination. An off-axis pupil and the principle of chromatic aberration (where different wavelengths come to focus at different distances behind a lens) can combine to provide “color-blind” animals with a way to distinguish colors. As a specific example, we constructed a computer model of the visual system of cephalopods (octopus, squid, and cuttlefish) that have a single unfiltered photoreceptor type. We compute a quantitative image quality budget for this visual system and show how chromatic blurring dominates the visual acuity in these animals in shallow water. This proposed mechanism is consistent with the extensive suite of visual/behavioral and physiological data that has been obtained from cephalopod studies and offers a possible solution to the apparent paradox of vivid chromatic behaviors in color blind animals. Moreover, this proposed mechanism has potential applicability in organisms with limited photoreceptor complements, such as spiders and dolphins.<br />
<br />
== Previous Seminars ==<br />
<br />
=== 2017/18 academic year ===<br />
<br />
'''July 10, 2017'''<br />
* Speaker: David Field<br />
* Time: 6:00pm<br />
* Affiliation: Cornell<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''July 18, 2017'''<br />
* Speaker: Jordi Puigbò<br />
* Time: 12:30<br />
* Affiliation: Synthetic, Perceptive, Emotive and Cognitive Systems (SPECS) lab, Dept. of Information and Telecommunication Technologies, Universitat Pompeu Fabra (Barcelona - Spain)<br />
* Host: Vasha<br />
* Status: Confirmed<br />
* Title: State Dependent Modulation of Perception Based on a Computational Model of Conditioning<br />
* Abstract: The embodied mammalian brain evolved to adapt to an only partially known and knowable world. The adaptive labeling of the world is critically dependent on the neocortex which in turn is modulated by a range of subcortical systems such as the thalamus, ventral striatum, and the amygdala. A particular case in point is the learning paradigm of classical conditioning, where acquired representations of states of the world such as sounds and visual features are associated with predefined discrete behavioral responses such as eye blinks and freezing. Learning progresses in a very specific order, where the animal first identifies the features of the task that are predictive of a motivational state and then forms the association of the current sensory state with a particular action and shapes this action to the specific contingency. This adaptive feature selection has both attentional and memory components, i.e. a behaviorally relevant state must be detected while its representation must be stabilized to allow its interfacing to output systems. Here we present a computational model of the neocortical systems that underlie this feature detection process and its state-dependent modulation mediated by the amygdala and its downstream target, the nucleus basalis of Meynert. Specifically, we analyze how amygdala-driven cholinergic modulation switches between two perceptual modes, one for exploitation of learned representations and prototypes and another for the exploration of new representations that provoked this change in the motivational state, presenting a framework for rapid learning of behaviorally relevant perceptual representations. Beyond reward-driven learning that is mostly based on exploitation, this talk presents a complementary mechanism for quick exploratory perception and learning grounded in the understanding of fear and surprise.<br />
<br />
'''Aug. 14, 2017'''<br />
* Speaker: Brent Doiron<br />
* Time: 12:00<br />
* Affiliation:<br />
* Host: Bruno/Hillel<br />
* Status: tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''Aug. 15, 2017'''<br />
* Speaker: Ken Miller<br />
* Time: 12:00<br />
* Affiliation: Columbia<br />
* Host: Bruno/Hillel<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Aug. 16, 2017'''<br />
* Speaker: Joshua Vogelstein<br />
* Time: 12:00<br />
* Affiliation: JHU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept. 6, 2017'''<br />
* Speaker: Gerald Friedland<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Jerry<br />
* Status: confirmed<br />
* Title: A Capacity Scaling Law for Artificial Neural Networks<br />
* Abstract:<br />
<br />
'''Sept. 20, 2017'''<br />
* Speaker: Carl Pabo<br />
* Time: 12:00<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Human Thought and the Human Future<br />
* Abstract:<br />
<br />
=== 2016/17 academic year ===<br />
<br />
'''Sept. 7, 2016'''<br />
* Speaker: Dan Stowell<br />
* Time: 12:00<br />
* Affiliation: Queen Mary, University of London<br />
* Host: Frederic Theunissen<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept. 8, 2016'''<br />
* Speaker: Barb Finlay<br />
* Time: 12:00<br />
* Affiliation: Cornell Univ<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept. 27, 2016'''<br />
* Speaker: Yoshua Bengio<br />
* Time: 11:00<br />
* Affiliation: Univ Montreal<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Oct. 12, 2016'''<br />
* Speaker: Paul Rhodes<br />
* Time: 4:00<br />
* Affiliation: Specific Technologies<br />
* Host: Dylan/Bruno<br />
* Status: confirmed<br />
* Title: A novel and important problem in spatiotemporal pattern classification<br />
* Abstract: Specific Technologies uses a sensor response that consists of a vector time series, a spatiotemporal fingerprint, to classify bacteria at the strain level during their growth. The identification of resistant strains of bacteria has become one of the world's great problems (here is a link to a $20M prize that the US govt has issued: https://www.nih.gov/news-events/news-releases/federal-prize-competition-seeks-innovative-ideas-combat-antimicrobial-resistance). We are using deep convolutional nets to do this classification, but they are instantaneous, and so do not capture the temporal patterns that are often at the core of what differentiates strains. So using the full temporal character of the sensor response time series is a cutting edge neural ML problem, and important to society too.<br />
<br />
'''Oct. 25, 2016'''<br />
* Speaker: Douglas L. Jones<br />
* Time: 2:00<br />
* Affiliation: ECE Department, University of Illinois at Urbana-Champaign<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Optimal energy-efficient coding in sensory neurons<br />
* Abstract: Evolutionary pressure suggests that the spike-based code in the sensory nervous system should satisfy two opposing constraints: 1) minimize signal distortion in the encoding process (i.e., maintain fidelity) by keeping the average spike rate as high as possible, and 2) minimize the metabolic load on the neuron by keeping the average spike rate as low as possible. We hypothesize that selective pressure has shaped the biophysics of a neuron to satisfy these conflicting demands. An energy-fidelity trade-off can be obtained through a constrained optimization process that achieves the lowest signal distortion for a given constraint on the spike rate. We derive the asymptotically optimal average-energy-constrained neuronal source code and show that it leads to a dynamic threshold that functions as an internal decoder (reconstruction filter) and adapts a spike-firing threshold so that spikes are emitted only when the coding error reaches this threshold. A stochastic extension is obtained by adding internal noise (dithering, or stochastic resonance) to the spiking threshold. We show that the source-coding neuron model i) reproduces experimentally observed spike-times in response to a stimulus, and ii) reproduces the serial correlations in the observed sequence of inter-spike intervals, using data from a peripheral sensory neuron and a central (cortical) somatosensory neuron. Finally, we show that the spike-timing code, although a temporal code, is in the limit of high firing rates an instantaneous rate code and accurately predicts the peri-stimulus time histogram (PSTH). We conclude by suggesting possible biophysical (ionic) mechanisms for this coding scheme.<br />
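The threshold-triggered coding scheme described in the abstract can be illustrated with a toy encoder: an internal leaky reconstruction acts as the decoder, and a spike is emitted only when the coding error reaches the threshold. Everything below (the dynamics, parameters, and stimulus) is a hypothetical sketch for illustration, not the speaker's model:

```python
import numpy as np

def spike_encode(stimulus, theta=0.2, tau=20.0, dt=1.0):
    """Toy error-triggered spike coder: an internal leaky reconstruction r
    acts as the decoder, and a spike is emitted only when the coding error
    |s - r| reaches the threshold theta. (Illustrative parameters only.)"""
    decay = np.exp(-dt / tau)
    r = 0.0
    spikes, recon = [], []
    for s in stimulus:
        r *= decay                        # internal reconstruction leaks away
        err = s - r
        if abs(err) >= theta:             # fire only when error hits threshold
            sign = 1.0 if err > 0 else -1.0
            spikes.append(sign)
            r += sign * theta             # each spike updates the reconstruction
        else:
            spikes.append(0.0)
        recon.append(r)
    return np.array(spikes), np.array(recon)

t = np.arange(500)
stim = 0.8 * np.sin(2 * np.pi * t / 200) + 0.8   # slowly varying test stimulus
spikes, recon = spike_encode(stim)
print(float(np.mean(spikes != 0)))               # fraction of steps with a spike
```

Raising theta lowers the spike rate at the cost of reconstruction fidelity, which is the energy-fidelity trade-off the abstract describes.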
<br />
'''October 26, 2016'''<br />
* Speaker: Eric Jonas<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Charles Frye<br />
* Status: confirmed<br />
* Title: Could a neuroscientist understand a microprocessor?<br />
* Abstract: There is a popular belief in neuroscience that we are primarily data limited, that producing large, multimodal, and complex datasets will, enabled by data analysis algorithms, lead to fundamental insights into the way the brain processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. Here we take a simulated classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the processor. This suggests that current computational approaches in neuroscience may fall short of producing meaningful models of the brain. We discuss several obvious shortcomings with this model, and ways that they might be addressed, both experimentally and computationally. <br />
* Bio: Eric Jonas is currently a postdoc in computer science at UC Berkeley working with Ben Recht on machine learning for scientific data acquisition. He earned his PhD in Computational Neuroscience, M. Eng in Electrical Engineering, BS in Electrical Engineering and Computer Science, and BS in Neurobiology, all from MIT. Prior to his return to academia, he was founder and CEO of Prior Knowledge, a predictive database company which was acquired in 2012 by Salesforce.com, where he was Chief Predictive Scientist until 2014. In 2015 he was named one of the top rising stars in bioengineering by the Defense Department’s Advanced Research Projects Agency (DARPA).<br />
<br />
'''Nov. 9, 2016'''<br />
* Speaker: Pulkit Agrawal<br />
* Time: 12:00<br />
* Affiliation: EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''Nov. 16, 2016'''<br />
* Speaker: Sebastian Musslick<br />
* Time: 12:00<br />
* Affiliation: Princeton Neuroscience Institute (Princeton University)<br />
* Host: Brian Cheung<br />
* Status: confirmed<br />
* Title: Parallel Processing Capability Versus Efficiency of Representation in Neural Network Architectures<br />
* Abstract: One of the most salient and well-recognized features of human goal-directed behavior is our limited ability to conduct multiple demanding tasks at once. Why is this? Some have suggested it reflects metabolic limitations, or structural ones. However, both explanations are unlikely. The brain routinely demonstrates the ability to carry out a multitude of processes in an enduring and parallel manner (walking, breathing, listening). Why, in contrast, is its capacity for allocating attention to control-demanding tasks - such a critical and powerful function - so limited? In the first part of my talk I will describe a computational framework that explains limitations of parallel processing in neural network architectures as the result of cross-talk between shared task representations. Using graph-theoretic analyses, we show that the parallel processing (multitasking) capability of two-layer networks drops precipitously as a function of task pathway overlap, and scales highly sublinearly with network size. I will describe how this analysis can be applied to task representations encoded in neural networks or neuroimaging data, and show how it can be used to predict both concurrent and sequential multitasking performance in trained neural networks based on single-task representations. Our results suggest that maximal parallel processing performance is achieved by segregating task pathways, by separating the representations on which they rely. However, there is a countervailing pressure for pathways to intersect: the re-use of representations to facilitate learning of new tasks. In the second part of my talk I will demonstrate a tradeoff between learning efficiency and parallel processing capability in neural networks. It can be shown that weight priors on learned task similarity improve learning speed and generalization but lead to strong constraints on parallel processing capability. These findings will be contrasted with an ongoing behavioral study assessing learning and multitasking performance of human subjects across tasks with varying degrees of feature overlap.<br />
<br />
'''Nov 30, 2016'''<br />
* Speaker: Marcus Rohrbach<br />
* Time: 12:00<br />
* Affiliation: EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''March 1st, 2017'''<br />
* Speaker: Sahar Akram<br />
* Time: 12:00<br />
* Affiliation: Starkey Hearing Research Center <br />
* Host: Shariq<br />
* Status: Confirmed<br />
* Title: Real-Time & Adaptive Auditory Neural Processing<br />
* Abstract: Decoding the dynamics of brain activity underlying conscious behavior is one of the key challenges in systems neuroscience. Sensory neurons, such as those in the auditory system, can undergo rapid and task-dependent changes in their response characteristics during attentive behavior, and thereby result in functional changes in the system over time. To quantify conscious experience in humans, neuroimaging techniques such as electroencephalography (EEG) and magnetoencephalography (MEG) are widely used to record the neural activity from the brain with millisecond temporal resolution. Therefore, a dynamic decoding framework on par with the sampling resolution of EEG/MEG is crucial in order to better understand the neural correlates underlying sophisticated cognitive functions such as attention. I will talk about two recent attempts on real-time decoding of brain neural activity during a competing auditory attention task, using Bayesian hierarchical modeling and adaptive signal processing.<br />
<br />
'''Mar 2, 2017'''<br />
* Speaker: Joszef Fiser<br />
* Time: 12:00<br />
* Affiliation:<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Mar 22, 2017'''<br />
* Speaker: Michael Frank<br />
* Time: 12:00<br />
* Affiliation: Magicore Systems<br />
* Host: Dylan<br />
* Status: Confirmed<br />
* Title: The Future of the Multi-core Platform Task-Superscalar Extensions to Von-Neumann Architecture and Optimization for Neural Networks<br />
* Abstract: Technology scaling had been carrying computer science through the second half of the 20th century until single-CPU performance started leveling off, after which multi- and many-core processors, including GPUs, emerged as the substrate for high-performance computing. Mobile market implementations followed this trend, and today you might be carrying a phone with more than 16 different processors. For power efficiency reasons, many of the cores are specialized to perform limited functions (such as modem or connectivity control, graphics rendering, or future neural-network acceleration), with most mainstream phones containing four or more general-purpose processors. As Steve Jobs insightfully commented almost a decade ago, “The way the processor industry is going is to add more and more cores, but nobody knows how to program those things.” Jobs was correct: programming these multiprocessor systems has become a challenge, and several programming models have been proposed in academia to address this issue. Power and thermals are also an ever-present thorn to mass-market applications. Through the years, CPUs based on the von Neumann architecture have fended off attacks from many directions; today complex superscalar implementations execute multiple instructions each clock cycle, parallel and out of order, keeping up the illusion of sequential processing. Recent research demonstrates, though, that augmenting the paradigm of the von Neumann architecture with a few established concepts from data-flow and task-parallel programming will create both a credible and intuitive parallel architecture, enabling notable compute-efficiency improvement while retaining compatibility with the current mainstream. This talk will thus review the current state of the processor industry and highlight why we are running out of steam with ILP; I will then outline the task-superscalar programming model as the “ring to rule them all” and provide insights as to how this architecture can take advantage of special HW acceleration for data-flow management and provide support for efficient neuromorphic computing.<br />
<br />
'''April 12, 2017'''<br />
* Speaker: Aapo Hyvarinen<br />
* Time: 12:00<br />
* Affiliation: Gatsby/UCL<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''May 24, 2017'''<br />
* Speaker: Pierre Sermanet<br />
* Time: 12:00<br />
* Affiliation: Google Brain<br />
* Host: Brian<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''May 30, 2017'''<br />
* Speaker: Heiko Schutt<br />
* Time: 12:00<br />
* Affiliation: Univ Tubingen<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''June 7, 2017'''<br />
* Speaker: Saurabh Gupta<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Spencer<br />
* Status: confirmed<br />
* Title: Cognitive Mapping and Planning for Visual Navigation<br />
* Abstract: We introduce a novel neural architecture for navigation in novel environments that learns a cognitive map from first person viewpoints and plans a sequence of actions towards goals in the environment. The Cognitive Mapper and Planner (CMP) is based on two key ideas: a) a unified joint architecture for mapping and planning, such that the mapping is driven by the needs of the planner, and b) a spatial memory with the ability to plan given an incomplete set of observations about the world. CMP constructs a top-down belief map of the world and applies a differentiable neural net planner to produce the next action at each time step. The accumulated belief of the world enables the agent to track visited regions of the environment. Our experiments demonstrate that CMP outperforms both reactive strategies and standard memory-based architectures and performs well even in novel environments. Furthermore, we show that CMP can also achieve semantically specified goals, such as “go to a chair”. This is joint work with James Davidson, Sergey Levine, Rahul Sukthankar and Jitendra Malik.<br />
<br />
'''June 14, 2017'''<br />
* Speaker: Madhow<br />
* Time: 12:00<br />
* Affiliation: UCSB<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''June 19, 2017'''<br />
* Speaker: Tali Tishby<br />
* Time: 12:00<br />
* Affiliation: Hebrew Univ.<br />
* Host: Bruno/Daniel Reichman<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''June 21, 2017'''<br />
* Speaker: Jasmine Collins<br />
* Time: 12:00<br />
* Affiliation: Google<br />
* Host: Brian<br />
* Status: confirmed<br />
* Title: Capacity and Trainability in Recurrent Neural Networks <br />
* Abstract: Two potential bottlenecks on the expressiveness of recurrent neural networks (RNNs) are their ability to store information about the task in their parameters, and to store information about the input history in their units. We show experimentally that all common RNN architectures achieve nearly the same per-task and per-unit capacity bounds with careful training, for a variety of tasks and stacking depths. They can store an amount of task information which is linear in the number of parameters, and is approximately 5 bits per parameter. They can additionally store approximately one real number from their input history per hidden unit. We further find that for several tasks it is the per-task parameter capacity bound that determines performance. These results suggest that many previous results comparing RNN architectures are driven primarily by differences in training effectiveness, rather than differences in capacity. Supporting this observation, we compare training difficulty for several architectures, and show that vanilla RNNs are far more difficult to train, yet have slightly higher capacity. Finally, we propose two novel RNN architectures, one of which is easier to train than the LSTM or GRU for deeply stacked architectures.<br />
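The capacity figures quoted in the abstract (roughly 5 bits of task information per parameter, and about one real number of input history per hidden unit) can be turned into a quick back-of-the-envelope estimate; the layer sizes below are arbitrary:

```python
def rnn_param_count(n_in, n_hidden, n_out):
    """Parameters of a vanilla RNN layer plus a linear readout:
    W_xh, W_hh, b_h, and W_hy, b_y."""
    return (n_in * n_hidden + n_hidden * n_hidden + n_hidden
            + n_hidden * n_out + n_out)

n_params = rnn_param_count(64, 256, 10)
task_bits = 5 * n_params      # ~5 bits per parameter (figure from the abstract)
history_numbers = 256         # ~one real number of input history per hidden unit
print(n_params, task_bits, history_numbers)  # 84746 423730 256
```

So even a modest vanilla RNN can, by these estimates, store on the order of hundreds of kilobits of task information, while its input-history memory is bounded by the (much smaller) hidden-unit count.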
<br />
=== 2015/16 academic year ===<br />
<br />
'''July 21, 2015'''<br />
* Speaker: Felix Effenberger<br />
* Affiliation: <br />
* Host: Chris H.<br />
* Status: confirmed<br />
* Title: <br />
* Abstract<br />
<br />
'''July 22, 2015'''<br />
* Speaker: Lav Varshney<br />
* Affiliation: Urbana-Champaign<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract<br />
<br />
'''July 23, 2015'''<br />
* Speaker: Xuemin Wei<br />
* Affiliation: Univ Penn<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract<br />
<br />
'''July 29, 2015'''<br />
* Speaker: Gonzalo Otazu<br />
* Affiliation: Cold Spring Harbor Laboratory, Long Island, NY<br />
* Host: Mike D<br />
* Status: Confirmed<br />
* Title: The Role of Cortical Feedback in Olfactory Processing<br />
* Abstract: The olfactory bulb receives rich glutamatergic projections from the piriform cortex. However, the dynamics and importance of these feedback signals remain unknown. In the first part of this talk, I will present data from multiphoton calcium imaging of cortical feedback in the olfactory bulb of awake mice. Responses of feedback boutons were sparse, odor specific, and often outlasted stimuli by several seconds. Odor presentation either enhanced or suppressed the activity of boutons. However, any given bouton responded with stereotypic polarity across multiple odors, preferring either enhancement or suppression. Inactivation of piriform cortex increased odor responsiveness and pairwise similarity of mitral cells but had little impact on tufted cells. We propose that cortical feedback differentially impacts these two output channels of the bulb by specifically decorrelating mitral cell responses to enable odor separation. In the second part of the talk I will introduce a computational model of odor identification in natural scenes that uses cortical feedback, and show how the model predictions match our experimental data.<br />
<br />
'''Aug 19, 2015'''<br />
* Speaker: Wujie Zhang<br />
* Affiliation: Columbia<br />
* Host: Bruno/Michael Yartsev<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept 2, 2015'''<br />
* Speaker: Jeremy Maitin-Shepard<br />
* Affiliation: Computer Science, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Combinatorial Energy Learning for Image Segmentation<br />
* Abstract: Recent advances in volume electron microscopy make it possible to image neuronal tissue volumes containing hundreds of thousands of neurons at sufficient resolution to discern even the finest neuronal processes. Accurate 3-D segmentation of these processes, densely packed in petavoxel-scale volumes, is the key bottleneck in reconstructing large-scale neural circuits.<br />
<br />
'''Sept 8, 2015'''<br />
* Speaker: Jennifer Hasler<br />
* Affiliation: Georgia Tech<br />
* Host: Bruno/Mika<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''October 29, 2015'''<br />
* Speaker: Garrett Kenyon<br />
* Affiliation: Los Alamos National Laboratory<br />
* Host: Dylan<br />
* Status: confirmed<br />
* Title: A Deconvolutional Competitive Algorithm (DCA)<br />
* Abstract: The Locally Competitive Algorithm (LCA) is a neurally-plausible sparse solver based on lateral inhibition between leaky integrator neurons. LCA accounts for many linear and nonlinear response properties of V1 simple cells, including end-stopping and contrast-invariant orientation tuning. Here, we describe a convolutional implementation of LCA in which a column of feature vectors is replicated with a stride that is much smaller than the diameter of the corresponding kernels, allowing the construction of dictionaries that are many times more overcomplete than without replication. Using a local Hebbian rule that minimizes sparse reconstruction error, we are able to learn representations from unlabeled imagery, including monocular and stereo video streams, that in some cases support near state-of-the-art performance on object detection, action classification and depth estimation tasks, with a simple linear classifier. We further describe a scalable approach to building a hierarchy of convolutional LCA layers, which we call a Deconvolutional Competitive Algorithm (DCA). All layers in a DCA are trained simultaneously and all layers contribute to a single image reconstruction, with each layer deconvolving its representation through all lower layers back to the image plane. We show that a 3-layer DCA trained on short video clips obtained from hand-held cameras exhibits a clear segregation of image content, with features in the top layer reconstructing large-scale structures while features in the middle and bottom layers reconstruct progressively finer details. Lastly, we describe PetaVision, an open source, cloud-friendly, high-performance neural simulation toolbox that was used to perform the numerical studies presented here.<br />
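For readers unfamiliar with the LCA referred to above, the core dynamics are simple: each unit's membrane potential integrates feedforward drive and lateral inhibition from other active units, and a threshold yields sparse activations. The following is a minimal single-layer sketch, not the PetaVision implementation; the dictionary, step size, and threshold are arbitrary choices for illustration:<br />

```python
import numpy as np

def soft_threshold(u, lam):
    """Threshold activation: shrink membrane potentials toward zero by lam."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(x, Phi, lam=0.1, n_steps=200, eta=0.05):
    """Run LCA dynamics to find a sparse code a with x ~ Phi @ a.

    Discretized update: du = eta * (Phi.T @ (x - Phi @ a) - u + a),
    equivalent to b - u - (Phi.T @ Phi - I) @ a with drive b = Phi.T @ x,
    i.e. active units laterally inhibit one another.
    """
    u = np.zeros(Phi.shape[1])
    for _ in range(n_steps):
        a = soft_threshold(u, lam)
        u += eta * (Phi.T @ (x - Phi @ a) - u + a)
    return soft_threshold(u, lam)

rng = np.random.default_rng(0)
Phi = rng.normal(size=(64, 128))            # 2x overcomplete random dictionary
Phi /= np.linalg.norm(Phi, axis=0)          # unit-norm dictionary elements
a_true = np.zeros(128)
a_true[rng.choice(128, 5, replace=False)] = rng.normal(size=5)
x = Phi @ a_true                            # signal with a 5-sparse code
a = lca(x, Phi)
print(np.mean(np.abs(a) > 1e-6))            # fraction of active units (sparse)
print(np.linalg.norm(x - Phi @ a) / np.linalg.norm(x))  # small relative residual
```

The convolutional and hierarchical (DCA) variants in the talk replicate such columns across image positions and stack layers, but the per-unit dynamics are the same.<br />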
<br />
'''Nov 18, 2015'''<br />
* Speaker: Hillel Adesnik<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Nov 17, 2015'''<br />
* Speaker: Manuel Lopez<br />
* Affiliation: <br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Dec 2, 2015'''<br />
* Speaker: Steven Brumby<br />
* Affiliation: [http://www.descarteslabs.com/ Descartes Labs]<br />
* Host: Dylan<br />
* Status: confirmed<br />
* Title: Seeing the Earth in the Cloud<br />
* Abstract: The proliferation of transistors has increased the performance of computing systems by over a factor of a million in the past 30 years, and is also dramatically increasing the amount of data in existence, driving improvements in sensor, communication and storage technology. Multi-decadal Earth and planetary remote sensing global datasets at the petabyte scale (8×10^15 bits) are now available in commercial clouds, and new satellite constellations are planning to generate petabytes of images per year, providing daily global coverage at a few meters per pixel. Cloud storage with adjacent high-bandwidth compute, combined with recent advances in neuroscience-inspired machine learning for computer vision, is enabling understanding of the world at a scale and at a level of granularity never before feasible. We report here on a computation processing over a petabyte of compressed raw data from 2.8 quadrillion pixels (2.8 petapixels) acquired by the US Landsat and MODIS programs over the past 40 years. Using commodity cloud computing resources, we convert the imagery to a calibrated, georeferenced, multiresolution tiled format suited for machine-learning analysis. We believe ours is the first application to process, in less than a day, on generally available resources, over a petabyte of scientific image data. We report on work using this reprocessed dataset for experiments demonstrating country-scale food production monitoring, an indicator for famine early warning. <br />
<br />
'''Dec 14, 2015'''<br />
* Speaker: Bill Softky <br />
* Affiliation:<br />
* Host: Bruno<br />
* Status: confirmed <br />
* Title: Screen addiction - informal Redwood group seminar<br />
<br />
'''Dec 16, 2015'''<br />
* Speaker: Mike Landy<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Feb 3, 2016'''<br />
* Speaker: Ping-Chen Huang<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Feb 17, 2016'''<br />
* Speaker: Andrew Saxe<br />
* Affiliation: Harvard<br />
* Host: Jesse<br />
* Status: confirmed<br />
* Title: Hallmarks of Deep Learning in the Brain<br />
<br />
'''Feb 24, 2016'''<br />
* Speaker: Miguel Perpinan<br />
* Affiliation: UC Merced<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
<br />
'''Mar 1, 2016'''<br />
* Speaker: Leon Gatys<br />
* Affiliation: Univ Tubingen<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Mar 7-9, 2016'''<br />
* NICE workshop<br />
<br />
'''Mar 9, 2016'''<br />
* Tatiana Engel - HWNI job talk at 12:00<br />
<br />
'''Mar 16, 2016'''<br />
* Talia Lerner - HWNI job talk at 12:00<br />
<br />
'''Mar 23, 2016'''<br />
* Speaker: Kwabena Boahen<br />
* Affiliation: Stanford<br />
* Host: Max Kanwal/Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''April 11, 2016'''<br />
* Speaker: Hao Su<br />
* Time: at 12:00<br />
* Affiliation: Geometric Computing Lab and Artificial Intelligence Lab, Stanford University<br />
* Host: Yubei<br />
* Status: confirmed<br />
* Title: [Tentative] Joint Analysis for 2D Images and 3D shapes<br />
* Abstract: Coming<br />
<br />
'''May 04, 2016'''<br />
* Speaker: Zhengya Zhang<br />
* Time: 12:00<br />
* Affiliation: Electrical Engineering and Computer Science, University of Michigan<br />
* Host: Dylan, Bruno<br />
* Status: Confirmed<br />
* Title: Sparse Coding ASIC Chips for Feature Extraction and Classification<br />
* Abstract: Hardware-based computer vision accelerators will be an essential part of future mobile and autonomous devices to meet their low-power and real-time processing requirements. To realize high energy efficiency and high throughput, the accelerator architecture can be massively parallelized and tailored to the underlying algorithms, which is an advantage over software-based solutions and general-purpose hardware. In this talk, I will present three application-specific integrated circuit (ASIC) chips that implement the sparse and independent local network (SAILnet) algorithm and the locally competitive algorithm (LCA) for feature extraction and classification. Two of the chips were designed using an array of leaky integrate-and-fire neurons. Sparse activations of the neurons make possible an efficient grid-ring architecture to deliver an image processing throughput of 1 G pixel/s using only 200 mW. The third chip was designed using a convolution approach. Sparsity is again an important factor that enabled the use of sparse convolvers to achieve an effective performance of 900 G operations/s using less than 150 mW.<br />
<br />
'''May 18, 2016'''<br />
* Speaker: Melanie Mitchell<br />
* Affiliation: Portland State University and Santa Fe Institute<br />
* Host: Dylan<br />
* Time: 12:00<br />
* Status: confirmed<br />
* Title: Using Analogy to Recognize Visual Situations<br />
* Abstract: Enabling computers to recognize abstract visual situations remains a hard open problem in artificial intelligence. No machine vision system comes close to matching human ability at identifying the contents of images or visual scenes, or at recognizing abstract similarity between different scenes, even though such abilities pervade human cognition. In this talk I will describe my research on getting computers to flexibly recognize visual situations by integrating low-level vision algorithms with an agent-based model of higher-level concepts and analogy-making.<br />
* Bio: Melanie Mitchell is Professor of Computer Science at Portland State University, and External Professor and Member of the Science Board at the Santa Fe Institute. She received a Ph.D. in Computer Science from the University of Michigan. Her dissertation, in collaboration with her advisor Douglas Hofstadter, was the development of Copycat, a computer program that makes analogies. She is the author or editor of five books and over 70 scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her most recent book, Complexity: A Guided Tour (Oxford, 2009), won the 2010 Phi Beta Kappa Science Book Award. It was also named by Amazon.com as one of the ten best science books of 2009, and was longlisted for the Royal Society's 2010 book prize. Melanie directs the Santa Fe Institute's Complexity Explorer project, which offers online courses and other educational resources related to the field of complex systems.<br />
<br />
'''June 8, 2016'''<br />
* Speaker: Kris Bouchard<br />
* Time: 12:00<br />
* Affiliation: LBNL<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: The union of intersections method<br />
* Abstract:<br />
<br />
'''June 15, 2016'''<br />
* Speaker: James Blackmon<br />
* Time: 12:00<br />
* Affiliation: San Francisco State University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
=== 2014/15 academic year ===<br />
<br />
'''2 July 2014'''<br />
* Speaker: Kelly Clancy<br />
* Affiliation: Feldman lab<br />
* Host: Guy<br />
* Status: confirmed<br />
* Title: Volitional control of neural assemblies in L2/3 of motor and somatosensory cortices<br />
* Abstract: I'll be talking about a joint effort between the Feldman, Carmena and Costa labs to study abstract task learning by small neuronal assemblies in intact networks. Brain-machine interfaces are a unique tool for studying learning, thanks to the direct mapping between neural activity and reward. We trained mice to operantly control an auditory cursor using spike-related calcium signals recorded with two-photon imaging in motor and somatosensory cortex, allowing us to assess the effects of learning with great spatial detail. Mice rapidly learned to modulate activity in layer 2/3 neurons, evident both across and within sessions. Interestingly, even neurons that exhibited very low or no spontaneous spiking--so-called 'silent' cells that are invisible to electrode-based techniques--could be behaviorally up-modulated for task performance. Learning was accompanied by modifications of firing correlations in spatially localized networks at fine scales.<br />
<br />
'''23 July 2014'''<br />
* Speaker: Gautam Agarwal<br />
* Affiliation: UC Berkeley/Champalimaud<br />
* Host: Friedrich Sommer<br />
* Status: confirmed<br />
* Title: Unsolved Mysteries of Hippocampal Dynamics<br />
* Abstract: Two radically different forms of electrical activity can be observed in the rat hippocampus: spikes and local field potentials (LFPs). Hippocampal pyramidal neurons are mostly silent, yet spike vigorously as the subject encounters particular locations in its environment. In contrast, LFPs appear to lack place-selectivity, persisting regardless of the rat's location. Recently, we found that in fact one can recover from LFPs the spatial information present in the underlying neuronal population, showing how these two signals are two sides of the same coin. Nonetheless, there are many aspects of the LFP that remain mysterious. I will review several observations and explanatory gaps which await further study. These include: the relationship of LFP patterns to anatomy; the elusive structure of gamma waves; complex forms of cross-frequency coupling; variations in LFP patterns seen when the rat explores its world more freely; reconciling the memory and navigation roles of the hippocampus.<br />
<br />
'''6 Aug 2014'''<br />
* Speaker: Georg Martius<br />
* Affiliation: Max Planck Institute, Leipzig<br />
* Host: Fritz Sommer<br />
* Status: confirmed<br />
* Title: Information driven self-organization of robotic behavior<br />
* Abstract: Autonomy is a puzzling phenomenon in nature and a major challenge in the world of artifacts. A key feature of autonomy in both natural and artificial systems is the ability for independent exploration. In animals and humans, the ability to modify one's own pattern of activity is not only an indispensable trait for adaptation and survival in new situations; it also provides a learning system with novel information for improving its cognitive capabilities, and it is essential for development. Efficient exploration in high-dimensional spaces is a major challenge in building learning systems. We propose to implement exploration as a deterministic law derived from maximizing an information quantity. More specifically, we use the predictive information of the sensor process (of a robot) to obtain an update rule (exploration dynamics) for the controller parameters. To be adequate for robotics applications, the non-stationary nature of the underlying time series has to be taken into account, which we do by proposing the time-local predictive information (TiPI). Importantly, the exploration dynamics is derived analytically, and by this we link information theory and dynamical systems. Without a random component, the change in the parameters is deterministically given as a function of the states in a certain time window. For an embodied system this means in particular that constraints, responses, and current knowledge of the dynamical interaction with the environment can directly be used to advance further exploration. Randomness is replaced with spontaneity, which we demonstrate to restrict the search space automatically to the physically relevant dimensions. Its effectiveness will be presented with various experiments on high-dimensional robotic systems, and we argue that this is a promising way to avoid the curse of dimensionality. This talk describes joint work with Ralf Der and Nihat Ay.<br />
<br />
'''15 Aug 2014'''<br />
* Speaker: Juergen Schmidhuber<br />
* Affiliation: IDSIA, Switzerland<br />
* Host: James/Shariq<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''2 Sept 2014'''<br />
* Speaker: Oriol Vinyals <br />
* Affiliation: Google<br />
* Host: Guy<br />
* Status: confirmed<br />
* Title: Machine Translation with Long Short-Term Memory Models<br />
* Abstract: Large supervised deep neural networks have achieved good results on speech recognition and computer vision. Although very successful, deep neural networks can only be applied to problems whose inputs and outputs can be conveniently encoded with vectors of fixed dimensionality; they cannot easily be applied to problems whose inputs and outputs are sequences. In this work, we show how to use a large deep Long Short-Term Memory (LSTM) model to solve domain-agnostic supervised sequence-to-sequence problems with minimal manual engineering. Our model uses one LSTM to map the input sequence to a vector of fixed dimensionality and another LSTM to map the vector to the output sequence. We applied our model to a machine translation task and achieved encouraging results. On the WMT'14 translation task from English to French, a model combination of 6 large LSTMs achieves a BLEU score of 32.3 (where a larger score is better). For comparison, a strong standard statistical MT baseline achieves a BLEU score of 33.3. When we use our LSTM to rescore the n-best lists produced by the SMT baseline, we achieve a BLEU score of 36.3, which is a new state of the art. This is joint work with Ilya Sutskever and Quoc Le.<br />
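The encoder-decoder scheme described above is easy to sketch: one LSTM consumes the source tokens and its final state seeds a second LSTM that emits target tokens greedily. The toy below only illustrates the data flow, not the paper's model; the weights are untrained random matrices and all sizes and names (`translate`, `E`, `W_out`, etc.) are invented for the sketch:<br />

```python
import numpy as np

def lstm_step(x, h, c, W):
    """One LSTM step: the four gates are computed from the joint vector [x; h]."""
    z = W @ np.concatenate([x, h])
    H = h.size
    i = 1.0 / (1.0 + np.exp(-z[:H]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))     # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))   # output gate
    g = np.tanh(z[3*H:])                    # candidate cell update
    c = f * c + i * g
    return o * np.tanh(c), c

rng = np.random.default_rng(0)
V, H = 10, 16                       # vocabulary size, hidden size (arbitrary)
E = rng.normal(0, 0.1, (V, H))      # token embeddings
W_enc = rng.normal(0, 0.1, (4 * H, 2 * H))
W_dec = rng.normal(0, 0.1, (4 * H, 2 * H))
W_out = rng.normal(0, 0.1, (V, H))  # hidden state -> vocabulary logits

def translate(src, max_len=8, bos=0, eos=1):
    h, c = np.zeros(H), np.zeros(H)
    for tok in src:                        # encoder: compress source into (h, c)
        h, c = lstm_step(E[tok], h, c, W_enc)
    out, tok = [], bos
    for _ in range(max_len):               # decoder: unroll from the summary vector
        h, c = lstm_step(E[tok], h, c, W_dec)
        tok = int(np.argmax(W_out @ h))    # greedy decoding (untrained weights)
        if tok == eos:
            break
        out.append(tok)
    return out

print(translate([3, 4, 5]))
```

In the actual system the weights are learned end-to-end and decoding uses a beam search rather than a single greedy path.<br />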
<br />
'''19 Sept 2014'''<br />
* Speaker: Gary Marcus<br />
* Affiliation: NYU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''24 Sept 2014'''<br />
* Speaker: Alyosha Efros<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''30 Sep 2014'''<br />
* Speaker: Alejandro Bujan<br />
* Affiliation:<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Propagation and variability of evoked responses: the role of correlated inputs and oscillations<br />
* Abstract: <br />
<br />
'''8 Oct 2014'''<br />
* Speaker: Siyu Zhang<br />
* Affiliation: UC Berkeley<br />
* Host: Karl<br />
* Status: confirmed<br />
* Title: Long-range and local circuits for top-down modulation of visual cortical processing<br />
* Abstract:<br />
<br />
'''15 Oct 2014'''<br />
* Speaker: Tamara Broderick<br />
* Affiliation: UC Berkeley<br />
* Host: Yvonne/James<br />
* Status: confirmed<br />
* Title: Feature allocations, probability functions, and paintboxes<br />
* Abstract: Clustering involves placing entities into mutually exclusive categories. We wish to relax the requirement of mutual exclusivity, allowing objects to belong simultaneously to multiple classes, a formulation that we refer to as "feature allocation." The first step is a theoretical one. In the case of clustering the class of probability distributions over exchangeable partitions of a dataset has been characterized (via exchangeable partition probability functions and the Kingman paintbox). These characterizations support an elegant nonparametric Bayesian framework for clustering in which the number of clusters is not assumed to be known a priori. We establish an analogous characterization for feature allocation; we define notions of "exchangeable feature probability functions" and "feature paintboxes" that lead to a Bayesian framework that does not require the number of features to be fixed a priori. The second step is a computational one. Rather than appealing to Markov chain Monte Carlo for Bayesian inference, we develop a method to transform Bayesian methods for feature allocation (and other latent structure problems) into optimization problems with objective functions analogous to K-means in the clustering setting. These yield approximations to Bayesian inference that are scalable to large inference problems.<br />
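In the clustering setting, the K-means-analogous objective alluded to above is DP-means (Kulis & Jordan, 2012): assign each point to its nearest center unless the squared distance exceeds a penalty lambda, in which case a new cluster is opened, so the number of clusters is not fixed a priori. A minimal sketch on synthetic data (illustrative only; not code from the talk):<br />

```python
import numpy as np

def dp_means(X, lam, n_iters=20):
    """K-means-like alternating minimization where a point farther than
    sqrt(lam) from every existing center opens a new cluster."""
    centers = [X[0].copy()]
    for _ in range(n_iters):
        assign = np.empty(len(X), dtype=int)
        for i, x in enumerate(X):
            d2 = np.array([np.sum((x - c) ** 2) for c in centers])
            j = int(np.argmin(d2))
            if d2[j] > lam:                 # too far from all centers: new cluster
                centers.append(x.copy())
                j = len(centers) - 1
            assign[i] = j
        # update step: move each non-empty center to the mean of its points
        centers = [X[assign == j].mean(axis=0)
                   for j in range(len(centers)) if np.any(assign == j)]
    return np.array(centers), assign

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.1, (30, 2)) for m in ((0, 0), (3, 3), (0, 3))])
centers, assign = dp_means(X, lam=1.0)
print(len(centers))   # discovers the three well-separated clusters
```

The feature-allocation analogue in the talk replaces single cluster labels with binary feature vectors, but the same small-variance-asymptotics recipe applies.<br />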
<br />
'''29 Oct 2014'''<br />
* Speaker: Ken Nakayama<br />
* Affiliation: Harvard<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Topics in higher level visuo-motor control<br />
* Abstract: TBA<br />
<br />
'''5 Nov 2014''' - BVLC retreat<br />
<br />
'''20 Nov 2014'''<br />
* Speaker: Haruo Hasoya<br />
* Affiliation: ATR Institute, Japan<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''9 Dec 2014'''<br />
* Speaker: Dirk DeRidder<br />
* Affiliation: Dunedin School of Medicine, University of Otago, New Zealand<br />
* Host: Bruno/Walter Freeman<br />
* Status: confirmed<br />
* Title: The Bayesian brain, phantom percepts and brain implants<br />
* Abstract: TBA<br />
<br />
'''January 14, 2015'''<br />
* Speaker: Kevin O'Regan<br />
* Affiliation: CNRS - Université Paris Descartes<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''January 21, 2015'''<br />
* Speaker: Adrienne Fairhall<br />
* Affiliation: University of Washington<br />
* Host: Mike Schachter<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''January 26, 2015'''<br />
* Speaker: Abraham Peled<br />
* Affiliation: Mental Health Center, 'Technion' Israel Institute of Technology<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Clinical Brain Profiling: A Neuro-Computational psychiatry<br />
* Abstract: TBA<br />
<br />
'''January 28, 2015'''<br />
* Speaker: Rich Ivry<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Embodied Decision Making: System interactions in sensorimotor adaptation and reinforcement learning<br />
* Abstract:<br />
<br />
'''February 11, 2015'''<br />
* Speaker: Mark Lescroart<br />
* Affiliation: UC Berkeley<br />
* Host: Karl<br />
* Status: tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''February 25, 2015'''<br />
* Speaker: Steve Chase<br />
* Affiliation: CMU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Joint Redwood/CNEP seminar<br />
* Abstract:<br />
<br />
'''March 3, 2015'''<br />
* Speaker: Andreas Herz<br />
* Affiliation: Bernstein Center, Munich<br />
* Host: Bruno/Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''March 3, 2015 - 4:00'''<br />
* Speaker: James Cooke<br />
* Affiliation: Oxford<br />
* Host: Mike Deweese<br />
* Status: confirmed<br />
* Title: Neural Circuitry Underlying Contrast Gain Control in Primary Auditory Cortex<br />
* Abstract:<br />
<br />
'''March 4, 2015'''<br />
* Speaker: Bill Sprague<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: V1 disparity tuning and the statistics of disparity in natural viewing<br />
* Abstract:<br />
<br />
'''March 11, 2015'''<br />
* Speaker: Jozsef Fiser<br />
* Affiliation: Central European University<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''April 1, 2015'''<br />
* Speaker: Saeed Saremi<br />
* Affiliation: Salk Inst<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''April 15, 2015'''<br />
* Speaker: Zahra M. Aghajan<br />
* Affiliation: UCLA<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Hippocampal Activity in Real and Virtual Environments<br />
* Abstract:<br />
<br />
'''May 7, 2015'''<br />
* Speaker: Santani Teng<br />
* Affiliation: MIT<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''May 13, 2015'''<br />
* Speaker: Harri Valpola<br />
* Affiliation: ZenRobotics<br />
* Host: Brian<br />
* Status: Tentative<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''June 24, 2015'''<br />
* Speaker: Kendrick Kay<br />
* Affiliation: Department of Psychology, Washington University in St. Louis<br />
* Host: Karl<br />
* Status: Confirmed<br />
* Title: Using functional neuroimaging to reveal the computations performed by the human visual system<br />
* Abstract: Visual perception is the result of a complex set of computational transformations performed by neurons in the visual system. Functional magnetic resonance imaging (fMRI) is ideally suited for identifying these transformations, given its excellent spatial resolution and ability to monitor activity across the numerous areas of visual cortex. In this talk, I will review past research in which we used fMRI to develop increasingly accurate models of the stimulus transformations occurring in early and intermediate visual areas. I will then describe recent research in which we successfully extend this approach to high-level visual areas involved in perception of visual categories (e.g. faces) and demonstrate how top-down attention modulates bottom-up stimulus representations. Finally, I will discuss ongoing research targeting regions of ventral temporal cortex that are essential for skilled reading. Our model-based approach, combined with high-field laminar measurements, is expected to provide an integrated picture of how bottom-up stimulus transformations and top-down cognitive factors interact to support rapid and accurate word recognition. Development of quantitative models and associated experimental paradigms may help us understand and diagnose impairments in neural processing that underlie visual disorders such as dyslexia and prosopagnosia.<br />
<br />
=== 2013/14 academic year ===<br />
<br />
'''9 Oct 2013'''<br />
* Speaker: Ekaterina Brocke<br />
* Affiliation: KTH University, Stockholm, Sweden<br />
* Host: Tony<br />
* Status: confirmed<br />
* Title: Multiscale modeling in Neuroscience: first steps towards multiscale co-simulation tool development.<br />
* Abstract: Multiscale modeling and simulation attracts an increasing number of neuroscientists who study how different levels of organization (networks of neurons, cellular/subcellular levels) interact with each other across multiple scales of space and time to mediate different brain functions. Different scales are usually described by different physical and mathematical formalisms, making the integration non-trivial. In this talk, I will discuss key phenomena in neuroscience that can be addressed using subcellular/cellular models and possible approaches to performing multiscale simulations, in particular a co-simulation method. I will also introduce several multiscale "toy" models of cellular/subcellular levels that were developed with the aim of understanding the numerical and technical problems that might appear during co-simulation. Finally, I will present the first steps made towards the development of a multiscale co-simulation tool.<br />
<br />
'''29 Oct 2013 - note: 4:00'''<br />
* Speaker: Mitya Chklovskii<br />
* Affiliation: HHMI/Janelia Farm<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''30 Oct 2013'''<br />
* Speaker: Ilya Nemenman<br />
* Affiliation: Emory University, Departments of Physics and Biology<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Large N in neural data -- expecting the unexpected.<br />
* Abstract: Recently it has become possible to directly measure simultaneous collective states of many biological components, such as neural activities, genetic sequences, or gene expression profiles. These data are revealing striking results, suggesting, for example, that biological systems are tuned to criticality, and that effective models of these systems based on only pairwise interactions among constitutive components provide surprisingly good fits to the data. We will explore a handful of simplified theoretical models, largely focusing on statistical mechanics of Ising spins, that suggest plausible explanations for these observations. Specifically, I will argue that, at least in certain contexts, these intriguing observations should be expected in multivariate interacting data in the thermodynamic limit of many interacting components.<br />
<br />
'''31 Oct 2013'''<br />
* Speaker: Oriol Vinyals<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Brian<br />
* Status: confirmed<br />
* Title: Beyond Deep Learning: Scalable Methods and Models for Learning<br />
* Abstract: In this talk I will briefly describe several techniques I explored in my thesis that improve how to efficiently model signal representations and learn useful information from them. The building block of my dissertation is based on machine learning approaches to classification, where a (typically non-linear) function is learned from labeled examples to map from signals to some useful information (e.g. an object class present in an image, or a word present in an acoustic signal). One of the motivating factors of my work has been advances in neural networks with deep architectures (which have led to the terminology "deep learning"), which have shown state-of-the-art performance in acoustic modeling and object recognition -- the main focus of this thesis. In my work, I have contributed to both the learning (or training) of such architectures through faster and more robust optimization techniques, and to the simplification of the deep architecture model to an approach that is simple to optimize. Furthermore, I derived a theoretical bound showing a fundamental limitation of shallow architectures based on sparse coding (which can be seen as a one-hidden-layer neural network), thus justifying the need for deeper architectures, while also empirically verifying these architectural choices on speech recognition. Many of my contributions have been used in a wide variety of applications, products and datasets as a result of many collaborations within ICSI and Berkeley, but also at Microsoft Research and Google Research.<br />
<br />
'''6 Nov 2013'''<br />
* Speaker: Garrett T. Kenyon<br />
* Affiliation: Los Alamos National Laboratory, The New Mexico Consortium<br />
* Host: Dylan Paiton<br />
* Status: Confirmed<br />
* Title: Using Locally Competitive Algorithms to Model Top-Down and Lateral Interactions<br />
* Abstract: Cortical connections consist of feedforward, feedback and lateral pathways. Infragranular layers project down the cortical hierarchy to both supra- and infragranular layers at the previous processing level, while the neurons in supragranular layers are linked by extensive long-range lateral projections that cross multiple cortical columns. However, most functional models of visual cortex only account for feedforward connections. Additionally, most models of visual cortex fail to account both for the thalamic projections to non-striate areas and the reciprocal connections from extrastriate areas back to the thalamus. In this talk, I will describe how a modified Locally Competitive Algorithm (LCA; Rozell et al, Neural Comp, 2008) can be used as a unifying framework for exploring the role of top-down and lateral cortical pathways within the context of deep, sparse, generative models. I will also describe an open source software tool called PetaVision that can be used to implement and execute hierarchical LCA-based models on multi-core, multi-node computer platforms without requiring specific knowledge of parallel-programming constructs.<br />
<br />
'''14 Nov 2013 (note: Thursday), 12:30pm'''<br />
* Speaker: Geoffrey J Goodhill<br />
* Affiliation: Queensland Brain Institute and School of Mathematics and Physics, The University of Queensland, Australia<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Computational principles of neural wiring development<br />
* Abstract: Brain function depends on precise patterns of neural wiring. An axon navigating to its target must make guidance decisions based on noisy information from molecular cues in its environment. I will describe a combination of experimental and computational work showing that (1) axons may act as ideal observers when sensing chemotactic gradients, (2) the complex influence of calcium and cAMP levels on guidance decisions can be predicted mathematically, (3) the morphology of growth cones at the axonal tip can be understood in terms of just a few eigenshapes, and remarkably these shapes oscillate in time with periods ranging from minutes to hours. Together this work may shed light on how neural wiring goes wrong in some developmental brain disorders, and how best to promote appropriate regrowth of axons after injury.<br />
<br />
'''4 Dec 2013'''<br />
* Speaker: Zhenwen Dai<br />
* Affiliation: FIAS, Goethe University Frankfurt, Germany.<br />
* Host: Georgios Exarchakis<br />
* Status: Confirmed<br />
* Title: What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach <br />
* Abstract: We study optimal image encoding based on a generative approach with non-linear feature combinations and explicit position encoding. By far most approaches to unsupervised learning of visual features, such as sparse coding or ICA, account for translations by representing the same features at different positions. Some earlier models used a separate encoding of features and their positions to facilitate invariant data encoding and recognition. All probabilistic generative models with explicit position encoding have so far assumed a linear superposition of components to encode image patches. Here, we for the first time apply a model with non-linear feature superposition and explicit position encoding for patches. By avoiding linear superpositions, the studied model represents a closer match to component occlusions which are ubiquitous in natural images. In order to account for occlusions, the non-linear model encodes patches qualitatively very different from linear models by using component representations separated into mask and feature parameters. We first investigated encodings learned by the model using artificial data with mutually occluding components. We find that the model extracts the components, and that it can correctly identify the occlusive components with the hidden variables of the model. On natural image patches, the model learns component masks and features for typical image components. By using reverse correlation, we estimate the receptive fields associated with the model’s hidden units. We find many Gabor-like or globular receptive fields as well as fields sensitive to more complex structures. Our results show that probabilistic models that capture occlusions and invariances can be trained efficiently on image patches, and that the resulting encoding represents an alternative model for the neural encoding of images in the primary visual cortex. <br />
<br />
'''11 Dec 2013'''<br />
* Speaker: Kai Siedenburg<br />
* Affiliation: UC Davis, Petr Janata's Lab.<br />
* Host: Jesse Engel<br />
* Status: Confirmed<br />
* Title: Characterizing Short-Term Memory for Musical Timbre<br />
* Abstract: Short-term memory is a cognitive faculty central to the apprehension of music and speech. Little is known, however, about memory for musical timbre despite its “sisterhood” with speech; after all, speech can be regarded as the sequencing of vocal timbre. Past research has isolated many characteristic effects of verbal memory. Are these also at play for non-vocal timbre sequences? We studied this question by considering short-term memory for serial order. Using timbres and dissimilarity data from McAdams et al. (Psych. Research, 1995), we employed a same/different discrimination paradigm. Experiment 1 (N = 30 MU + 30 nonMU) revealed effects of sequence length and timbral dissimilarity of items, as well as an interaction of musical training and pitch variability: in contrast to musicians, non-musicians' performance was impaired by simultaneous changes in pitch, compared to a constant pitch baseline. Experiment 2 (N = 22) studied whether musicians' memory for timbre sequences was independent of pitch irrespective of the degree of complexity of pitch progressions. Comparing sequences with pitch changing within and across standard and comparison to a constant pitch baseline, performance was now clearly impaired for the variable pitch condition. Experiment 3 (N = 22) showed primacy and recency effects for musicians, and reproduced a positive effect of timbral heterogeneity of sequences. Our findings demonstrate the presence of hallmark effects of verbal memory such as similarity, word length, and primacy/recency in the domain of non-vocal timbre, and suggest that memory for speech and non-vocal timbre sequences might to a large extent share underlying mechanisms.<br />
<br />
'''12 Dec 2013'''<br />
* Speaker: Matthias Bethge<br />
* Affiliation: University of Tübingen<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''22 Jan 2014'''<br />
* Speaker: Thomas Martinetz<br />
* Affiliation: Univ Luebeck<br />
* Host: Bruno/Fritz<br />
* Status: confirmed<br />
* Title: Orthogonal Sparse Coding and Sensing<br />
* Abstract: Sparse coding has been a very successful concept, since many natural signals have the property of being sparse in some dictionary (basis). Some natural signals are even sparse in an orthogonal basis, most prominently natural images, which are sparse in an appropriate wavelet transform. An encoding in an orthogonal basis has a number of advantages, e.g., finding the optimal coding coefficients is simply a projection instead of being NP-hard.<br />
Given some data, we want to find the orthogonal basis which provides the sparsest code. This problem can be seen as a generalization of Principal Component Analysis. We present an algorithm, Orthogonal Sparse Coding (OSC), which is able to find this basis very robustly. On natural images, it compresses at the level of JPEG, but it can adapt to arbitrary and specialized data sets and achieve significant improvements. Given the property of being sparse in some orthogonal basis, we show how signals can be sensed very efficiently in a hierarchical manner with at most k log D sensing actions. This hierarchical sensing might relate to the way we sense the world, with interesting applications in active vision. <br />
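The projection property mentioned above can be made concrete with a small sketch (our illustration in NumPy, not the speaker's OSC code): in an orthogonal basis, the optimal k-sparse code of a signal is obtained by projecting onto the basis and keeping the k largest coefficients.<br />

```python
import numpy as np

# Minimal sketch (our illustration, not the speaker's OSC algorithm) of
# why orthogonality helps: in an orthogonal basis the optimal k-sparse
# code is a projection followed by keeping the k largest coefficients.

def k_sparse_code(x, U, k):
    """k-sparse code of x in the orthogonal basis U (columns of U)."""
    a = U.T @ x                         # projection: exact, not NP-hard
    small = np.argsort(np.abs(a))[:-k]  # indices of the D - k smallest
    a[small] = 0.0                      # hard-threshold them to zero
    return a

rng = np.random.default_rng(0)
D, k = 8, 2
U, _ = np.linalg.qr(rng.standard_normal((D, D)))  # random orthogonal basis
x = 3.0 * U[:, 0] + 1.5 * U[:, 5]                 # signal truly 2-sparse in U
a = k_sparse_code(x, U, k)
print(np.count_nonzero(a), np.allclose(U @ a, x)) # 2 True: exact recovery
```

For signals that are only approximately sparse in U, the same projection still gives the best k-term approximation in the least-squares sense, which is the sense in which finding the coefficients is "simply a projection".<br />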
<br />
'''29 Jan 2014'''<br />
* Speaker: David Klein<br />
* Affiliation: Audience<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''5 Feb 2014''' (leave open for Barth/Martinetz seminar)<br />
<br />
'''12 Feb 2014'''<br />
* Speaker: Ilya Sutskever <br />
* Affiliation: Google<br />
* Host: Zayd<br />
* Status: confirmed<br />
* Title: Continuous vector representations for machine translation<br />
* Abstract: Dictionaries and phrase tables are the basis of modern statistical machine translation systems. I will present a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures using large monolingual data, and by mapping between the languages using a small bilingual dataset. It uses distributed representations of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish. This method makes few assumptions about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pair. Joint work with Tomas Mikolov and Quoc Le.<br />
<br />
'''25 Feb 2014'''<br />
* Speaker: Alexander Terekhov <br />
* Affiliation: CNRS - Université Paris Descartes<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Constructing space: how a naive agent can learn spatial relationships by observing sensorimotor contingencies<br />
* Abstract:<br />
<br />
'''12 March 2014'''<br />
* Speaker: Carlos Portera-Cailliau<br />
* Affiliation: UCLA<br />
* Host: Mike<br />
* Status: confirmed<br />
* Title: Circuit defects in the neocortex of Fmr1 knockout mice<br />
* Abstract: TBA<br />
<br />
'''19 March 2014'''<br />
* Speaker: Dean Buonomano<br />
* Affiliation: UCLA<br />
* Host: Mike<br />
* Status: confirmed<br />
* Title: State-dependent Networks: Timing and Computations Based on Neural Dynamics and Short-term Plasticity<br />
* Abstract: The brain’s ability to seamlessly assimilate and process spatial and temporal information is critical to most behaviors, from understanding speech to playing the piano. Indeed, because the brain evolved to navigate a dynamic world, timing and temporal processing represent a fundamental computation. We have proposed that timing and the processing of temporal information emerge from the interaction between incoming stimuli and the internal state of neural networks. The internal state is defined not only by ongoing activity (the active state) but also by time-varying synaptic properties, such as short-term synaptic plasticity (the hidden state). One prediction of this hypothesis is that timing is a general property of cortical circuits. We provide evidence in this direction by demonstrating that in vitro cortical networks can “learn” simple temporal patterns. Finally, previous theoretical studies have suggested that recurrent networks capable of self-perpetuating activity hold significant computational potential. However, harnessing the computational potential of these networks has been hampered by the fact that such networks are chaotic. We show that it is possible to “tame” chaos through recurrent plasticity, creating a novel and powerful general framework for how cortical circuits compute.<br />
<br />
'''26 March 2014'''<br />
* Speaker: Robert G. Smith<br />
* Affiliation: University of Pennsylvania<br />
* Host: Mike S<br />
* Status: confirmed<br />
* Title: Role of Dendritic Computation in the Direction-Selective Circuit of Retina<br />
* Abstract: The retina utilizes a variety of signal processing mechanisms to compute direction from image motion. The computation is accomplished by a circuit that includes starburst amacrine cells (SBACs), which are GABAergic neurons presynaptic to direction-selective ganglion cells (DSGCs). SBACs are symmetric neurons with several branched dendrites radiating out from the soma. When a stimulus moving back and forth along a SBAC dendrite sequentially activates synaptic inputs, larger post-synaptic potentials (PSPs) are produced in the dendritic tips when the stimulus moves outwards from the soma. The directional difference in EPSP amplitude is further amplified near the dendritic tips by voltage-gated channels to produce directional release of GABA. Reciprocal inhibition between adjacent SBACs may also amplify directional release. Directional signals in the independent SBAC branches are preserved because each dendrite makes selective contacts only with DSGCs of the appropriate preferred direction. Directional signals are further enhanced within the dendritic arbor of the DSGC, which essentially comprises an array of distinct dendritic compartments. Each of these dendritic compartments locally sums excitatory and inhibitory inputs, amplifies them with voltage-gated channels, and generates spikes that propagate to the axon via the soma. Overall, the computation of direction in the retina is performed by several local dendritic mechanisms, both presynaptic and postsynaptic, with the result that directional responses are robust over a broad range of stimuli.<br />
<br />
'''16 April 2014'''<br />
* Speaker: David Pfau<br />
* Affiliation: Columbia<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''22 April 2014 *Tuesday*'''<br />
* Speaker: Jochen Braun<br />
* Affiliation: Otto-von-Guericke University, Magdeburg<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Dynamics of visual perception and collective neural activity<br />
* Abstract:<br />
<br />
'''29 April 2014'''<br />
* Speaker: Giuseppe Vitiello<br />
* Affiliation: University of Salerno<br />
* Host: Fritz/Walter Freeman<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''30 April 2014'''<br />
* Speaker: Masataka Watanabe<br />
* Affiliation: University of Tokyo / Max Planck Institute for Biological Cybernetics<br />
* Host: Gautam Agarwal<br />
* Status: confirmed<br />
* Title: Turing Test for Machine Consciousness and the Chaotic Spatiotemporal Fluctuation Hypothesis<br />
* Abstract: I propose an experimental method to test various hypotheses on consciousness. Inspired by Sperry's observation that split-brain patients possess two independent streams of consciousness, the idea is to implement candidate neural mechanisms of visual consciousness onto an artificial cortical hemisphere and test whether subjective experience is evoked in the device's visual hemifield. In contrast to modern neurosynthetic devices, I show that mimicking interhemispheric connectivity assures that authentic and fine-grained subjective experience arises only when a stream of consciousness is generated within the device. It is valid under a widely believed assumption regarding interhemispheric connectivity and neuronal stimulus-invariance. (I will briefly explain my own evidence that human V1 does not respond to changes in the contents of visual awareness [1].)<br />
<br />
If consciousness is actually generated within the device, we should be able to construct a case where two objects presented in the device's visual field are distinguishable by visual experience but not by what is communicated through the brain-machine interface. As strange as it may sound, and clearly violating the laws of physics, this is likely to be happening in the intact brain, where unified subjective bilateral vision and its verbal report occur without the total interhemispheric exchange of conscious visual information.<br />
<br />
Together, I present a hypothesis on the neural mechanism of consciousness, “The Chaotic Spatiotemporal Fluctuation Hypothesis”, that passes the proposed test for visual qualia and also explains how physics as we know it today is violated. Here, neural activity is divided into two components, the time-averaged activity and the residual temporally fluctuating activity, where the former serves as the content of consciousness (neuronal population vector) and the latter as consciousness itself. The content is “read” into consciousness in the sense that every local perturbation caused by a change in the neuronal population vector creates a spatiotemporal wave in the fluctuation component that travels throughout the system. Deterministic chaos assures that every local difference makes a difference to the whole of the dynamics, as in the butterfly effect, serving as a foundation for the holistic nature of consciousness. I will present data from simultaneous electrophysiology-fMRI recordings and human fMRI [2] that supports the existence of such large-scale causal fluctuation.<br />
<br />
Here, the chaotic fluctuation cannot be decoded to trace back the original perturbation in the neuronal population vector, because the initial states of all neurons would be required with infinite precision to do so. Hence what is transmitted between the two hemispheres is not "information" in the normal sense. This illustrates the violation of physics by the metaphysical assumption, "chaotic spatiotemporal fluctuation is consciousness", where unification of bilateral vision and the solving of visual tasks (e.g. perfect symmetry detection) are achieved without exchanging the otherwise required Shannon information between the two hemispheres.<br />
<br />
Finally, minimal and realistic versions of the proposed test for visual qualia can be conducted on laboratory animals to validate the hypothesis. This version deals with two biological hemispheres, which we already know contain consciousness. We dissect the interhemispheric connectivity and replace it with an artificial one that is capable of filtering out the neural fluctuation component. A limited interhemispheric connectivity may be sufficient, which would drastically reduce the technological challenge. If the subject is capable of conducting a bilateral stimulus-matching task with the full artificial interhemispheric connectivity, but not when the fluctuation component is filtered out, this can be considered strong supporting evidence for the hypothesis.<br />
<br />
1. Watanabe, M., Cheng, K., Ueno, K., Asamizuya, T., Tanaka, K., Logothetis, N. Attention but not awareness modulates the BOLD signal in the human V1 during binocular suppression. Science, 2011. 334(6057): p. 829-31.<br />
<br />
2. Watanabe, M., Bartels, A., Macke, J., Logothetis, N. Temporal jitter of the BOLD signal reveals a reliable initial dip and improved spatial resolution. Curr Biol, 2013. 23(21): p. 2146-50.<br />
<br />
'''11 June 2014'''<br />
* Speaker: Stuart Hameroff<br />
* Affiliation: University of Arizona, Tucson<br />
* Host: Gautam<br />
* Status: confirmed<br />
* Title: ‘Tuning the brain’ – Treating mental states through microtubule vibrations <br />
* Abstract: Do mental states derive entirely from brain neuronal membrane activities? Neuronal interiors are organized by microtubules (‘MTs’), protein polymers proposed to encode memory, process information and support consciousness. Using nanotechnology, Bandyopadhyay’s group at MIT has shown coherent vibrations (megahertz to 10 kilohertz) from microtubule bundles inside active neurons, vibrations (electric field potentials ~40 to 50 mV) able to influence membrane potentials. This suggests EEG rhythms are ‘beat’ frequencies of megahertz vibrations in microtubules inside neurons (Hameroff and Penrose, 2014), and that consciousness and cognition involve vibrational patterns resonating across scales in the brain, more like music than computation. MT megahertz vibrations may be a useful therapeutic target for ‘tuning’ mood and mental states. Among noninvasive transcranial brain stimulation techniques (TMS, tDCS), transcranial ultrasound (TUS) uses megahertz mechanical vibrations. Applied at the scalp, low-intensity, sub-thermal ultrasound safely reaches the brain. In human studies, brief (15 to 30 seconds) TUS at 0.5, 2 and 8 megahertz to frontal-temporal cortex results in 40 minutes or longer of reported mood improvement, and focused TUS enhances sensory discrimination (Legon et al, 2014). In vitro, ultrasound promotes neurite outgrowth in embryonic neurons (Raman), and stabilizes microtubules against disassembly (Gupta). (In Alzheimer’s disease, MTs disassemble and release tau.) These findings suggest ‘tuning the brain’ with TUS should be a safe, effective and inexpensive treatment for Alzheimer’s, traumatic brain injury, depression, anxiety, PTSD and other disorders. <br />
<br />
References: Hameroff S, Penrose R (2014) Phys Life Rev http://www.sciencedirect.com/science/article/pii/S1571064513001188; Sahu et al (2013) Biosens Bioelectron 47:141–8; Sahu et al (2013) Appl Phys Lett 102:123701; Legon et al (2014) Nature Neuroscience 17: 322–329<br />
<br />
'''25 June 2014'''<br />
* Speaker: Peter Loxley<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: The two-dimensional Gabor function adapted to natural image statistics: An analytical model of simple-cell responses in the early visual system<br />
* Abstract: TBA<br />
<br />
=== 2012/13 academic year ===<br />
<br />
'''26 Sept 2012''' <br />
* Speaker: Jason Yeatman<br />
* Affiliation: Department of Psychology, Stanford University<br />
* Host: Bruno/Susana Chung<br />
* Status: confirmed<br />
* Title: The Development of White Matter and Reading Skills<br />
* Abstract: The development of cerebral white matter involves both myelination and pruning of axons, and the balance between these two processes may differ between individuals. Cross-sectional measures of white matter development mask the interplay between these active developmental processes and their connection to cognitive development. We followed a cohort of 39 children longitudinally for three years, and measured white matter development and reading development using diffusion tensor imaging and behavioral tests. In the left arcuate and inferior longitudinal fasciculus, children with above-average reading skills initially had low fractional anisotropy (FA) with a steady increase over the 3-year period, while children with below-average reading skills had higher initial FA that declined over time. We describe a dual-process model of white matter development that balances biological processes that have opposing effects on FA, such as axonal myelination and pruning, to explain the pattern of results.<br />
<br />
'''8 Oct 2012''' <br />
* Speaker: Sophie Deneve<br />
* Affiliation: Laboratoire de Neurosciences cognitives, ENS-INSERM<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Balanced spiking networks can implement dynamical systems with predictive coding<br />
* Abstract: Neural networks can integrate sensory information and generate continuously varying outputs, even though individual neurons communicate only with spikes (all-or-none events). Here we show how this can be done efficiently if spikes communicate "prediction errors" between neurons. We focus on the implementation of linear dynamical systems and derive a spiking network model from a single optimization principle. Our model naturally accounts for two puzzling aspects of cortex. First, it provides a rationale for the tight balance and correlations between excitation and inhibition. Second, it predicts asynchronous and irregular firing as a consequence of predictive population coding, even in the limit of vanishing noise. We show that our spiking networks have error-correcting properties that make them far more accurate and robust than comparable rate models. Our approach suggests that spike times do matter when considering how the brain computes, and that the reliability of cortical representations may have been strongly underestimated.<br />
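The core "spikes communicate prediction errors" idea can be caricatured with a one-neuron toy (our sketch, far simpler than the balanced networks in the talk): the neuron fires exactly when its decoded readout underestimates the target signal by more than half a spike's worth, which keeps the decoding error bounded.<br />

```python
# One-neuron caricature of predictive coding with spikes (our toy sketch,
# much simpler than the networks in the talk).  The neuron fires when the
# prediction error exceeds half a spike's weight, bounding the error.

dt, tau, w = 0.001, 0.05, 0.1      # time step (s), readout decay, spike weight
x = 1.0                            # constant target signal to track
r = 0.0                            # decoded readout
spikes = 0
for _ in range(1000):              # simulate one second
    if x - r > w / 2:              # prediction error crosses threshold
        r += w                     # emit a spike: bump the readout
        spikes += 1
    r -= dt * r / tau              # readout decays between spikes

print(spikes > 0, abs(x - r) < w)  # True True: tracking with bounded error
```

In the full model, many neurons share the readout and inhibit one another as soon as one of them fires, which is where the tight excitation-inhibition balance and the irregular, asynchronous spiking come from.<br />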
<br />
<br />
'''19 Oct 2012'''<br />
* Speaker: Gert Van Dijck<br />
* Affiliation: Cambridge<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: A solution to identifying neurones using extracellular activity in awake animals: a probabilistic machine-learning approach<br />
* Abstract: Electrophysiological studies over the last fifty years have been hampered by the difficulty of reliably assigning signals to identified cortical neurones. Previous studies have employed a variety of measures based on spike timing or waveform characteristics to tentatively classify other neurone types (Vos et al., Eur. J. Neurosci., 1999; Prsa et al., J. Neurosci., 2009), in some cases supported by juxtacellular labelling (Simpson et al., Prog. Brain Res., 2005; Holtzman et al., J. Physiol., 2006; Barmack and Yakhnitsa, J. Neurosci., 2008; Ruigrok et al., J. Neurosci., 2011), or intracellular staining and/or assessment of membrane properties (Chadderton et al., Nature, 2004; Jorntell and Ekerot, J. Neurosci., 2006; Rancz et al., Nature, 2007). Anaesthetised animals have been widely used as they can provide a ground-truth through neuronal labelling, which is much harder to achieve in awake animals, where spike-derived measures tend to be relied upon (Lansink et al., Eur. J. Neurosci., 2010). Whilst spike-shapes carry potentially useful information for classifying neuronal classes, they vary with electrode type and the geometric relationship between the electrode and the spike generation zone (Van Dijck et al., Int. J. Neural Syst., 2012). Moreover, spike-shape measurement is achieved with a variety of techniques, making it difficult to compare and standardise between laboratories. In this study we build probabilistic models on the statistics derived from the spike trains of spontaneously active neurones in the cerebellum and the ventral midbrain. The mean spike frequency in combination with the log-interval-entropy (Bhumbra and Dyball, J. Physiol.-London, 2004) of the inter-spike-interval distribution yields the highest prediction accuracy. The cerebellum model consists of two sub-models: a molecular layer - Purkinje layer model and a granular layer - Purkinje layer model. 
The first model identifies molecular layer interneurones and Purkinje cells with high accuracy (92.7 %), while the latter identifies Golgi cells, granule cells, mossy fibers and Purkinje cells with high accuracy (99.2 %). Furthermore, it is shown that the model trained on anaesthetized rat and decerebrate cat data has broad applicability to other species and behavioural states: anaesthetized mice (80 %), awake rabbits (94.2 %) and awake rhesus monkeys (89 - 90 %). Recently, optogenetics has made it possible to obtain a ground truth about cell classes. Using opto-genetically identified GABAergic and dopaminergic cells, we build similar statistical models to identify these neuron types in the ventral midbrain. Hence, this illustrates that our approach will be of general use to a broad variety of laboratories.<br />
<br />
'''Tuesday, 23 Oct 2012''' <br />
* Speaker: Jaimie Sleigh<br />
* Affiliation: University of Auckland<br />
* Host: Fritz/Andrew Szeri<br />
* Status: confirmed<br />
* Title: Is General Anesthesia a failure of cortical information integration?<br />
* Abstract: General anesthesia and natural sleep share some commonalities and some differences. Quite a lot is known about the chemical and neuronal effects of general anesthetic drugs. There are two main groups of anesthetic drugs, which can be distinguished by their effects on the EEG. The most commonly used drugs exert a strong GABAergic action, whereas a second group is characterized by minimal GABAergic effects but significant NMDA blockade. It is less clear which of these effects, and by what mechanism, cause the patient to fail to wake up when the surgeon cuts them. I will present some results from experimental brain slice work, and theoretical mean field modelling of anesthesia and sleep, that support the idea that the final common mechanism of both types of anaesthesia is fragmentation of long-distance information flow in the cortex.<br />
<br />
'''31 Oct 2012''' (Halloween)<br />
* Speaker: Jonathan Landy<br />
* Affiliation: UCSB<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Mean-field replica theory: review of basics and a new approach<br />
* Abstract: Replica theory provides a general method for evaluating the mode of a distribution, and has varied applications to problems in statistical mechanics, signal processing, etc. Evaluation of the formal expressions arising in replica theory represents a formidable technical challenge, but one that physicists have apparently intuited correct methods for handling. In this talk, I will first provide a review of the historical development of replica theory, covering: 1) motivation, 2) the intuited "Parisi ansatz" solution, 3) continued controversies, and 4) a survey of applications (including to neural networks). Following this, I will discuss an exploratory effort of mine, aimed at developing an ansatz-free solution method. As an example, I will work out the phase diagram for a simple spin-glass model. This talk is intended primarily as a tutorial.<br />
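For orientation, the formal expression at the heart of the replica method (standard background, not specific to this talk's new results) is the identity<br />

```latex
\overline{\log Z} \;=\; \lim_{n \to 0} \frac{\overline{Z^n} - 1}{n},
```

where the overbar denotes the average over the quenched disorder. One computes $\overline{Z^n}$ for integer $n$ (that is, for $n$ coupled copies, or "replicas", of the system) and then analytically continues to $n \to 0$; this continuation step is where the Parisi ansatz and the continued controversies enter.<br />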
<br />
'''7 Nov 2012''' <br />
* Speaker: Tom Griffiths<br />
* Affiliation: UC Berkeley<br />
* Host: Daniel Little<br />
* Status: Confirmed<br />
* Title: Identifying human inductive biases<br />
* Abstract: People are remarkably good at acquiring complex knowledge from limited data, as is required in learning causal relationships, categories, or aspects of language. Successfully solving inductive problems of this kind requires having good "inductive biases" - constraints that guide inductive inference. Viewed abstractly, understanding human learning requires identifying these inductive biases and exploring their origins. I will argue that probabilistic models of cognition provide a framework that can facilitate this project, giving a transparent characterization of the inductive biases of ideal learners. I will outline how probabilistic models are traditionally used to solve this problem, and then present a new approach that uses Markov chain Monte Carlo algorithms as the basis for an experimental method that magnifies the effects of inductive biases.<br />
<br />
'''19 Nov 2012''' (Monday) (Thanksgiving week)<br />
* Speaker: Bin Yu<br />
* Affiliation: Dept. of Statistics and EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Representation of Natural Images in V4<br />
* Abstract: The functional organization of area V4 in the mammalian ventral visual pathway is far from being well understood. V4 is believed to play an important role in the recognition of shapes and objects and in visual attention, but the complexity of this cortical area makes it hard to analyze. In particular, no current model of V4 has shown good predictions for neuronal responses to natural images and there is no consensus on the primary role of V4.<br />
In this talk, we present analysis of electrophysiological data on the response of V4 neurons to natural images. We propose a new computational model that achieves comparable prediction performance for V4 as for V1 neurons. Our model does not rely on any pre-defined image features but only on invariance and sparse coding principles. We interpret our model using sparse principal component analysis and discover two groups of neurons: those selective to texture versus those selective to contours. This supports the thesis that one primary role of V4 is to extract objects from background in the visual field. Moreover, our study also confirms the diversity of V4 neurons. Among those selective to contours, some of them are selective to orientation, others to acute curvature features.<br />
(This is joint work with J. Mairal, Y. Benjamini, B. Willmore, M. Oliver, and J. Gallant.)<br />
<br />
'''30 Nov 2012''' <br />
* Speaker: Yan Karklin<br />
* Affiliation: NYU<br />
* Host: Tyler<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''10 Dec 2012 (note this would be the Monday after NIPS)''' <br />
* Speaker: Marius Pachitariu<br />
* Affiliation: Gatsby / UCL<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: NIPS paper "Learning visual motion in recurrent neural networks"<br />
* Abstract: We present a dynamic nonlinear generative model for visual motion based on a latent representation of binary-gated Gaussian variables connected in a network. Trained on sequences of images by an STDP-like rule, the model learns to represent different movement directions in different variables. We use an online approximate inference scheme that can be mapped to the dynamics of networks of neurons. Probed with drifting grating stimuli and moving bars of light, neurons in the model show patterns of responses analogous to those of direction-selective simple cells in primary visual cortex. We show how the computations of the model are enabled by a specific pattern of learnt asymmetric recurrent connections. I will also briefly discuss our application of recurrent neural networks as statistical models of simultaneously recorded spiking neurons.<br />
<br />
'''12 Dec 2012''' <br />
* Speaker: Ian Goodfellow<br />
* Affiliation: U Montreal<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''7 Jan 2013'''<br />
* Speaker: Stuart Hameroff<br />
* Affiliation: University of Arizona <br />
* Host: Gautam Agarwal<br />
* Status: confirmed<br />
* Title: Quantum cognition and brain microtubules <br />
* Abstract: Cognitive decision processes are generally seen as classical Bayesian probabilities, but may be better suited to quantum mathematics. For example: 1) Psychological conflict, ambiguity and uncertainty can be viewed as (quantum) superposition of multiple possible judgments and beliefs. 2) Measurement (e.g. answering a question, reaching a decision) reduces possibilities to definite states (‘constructing reality’, ‘collapsing the wave function’). 3) Previous questions influence subsequent answers, so sequence affects outcomes (‘contextual non-commutativity’). 4) Judgments and choices may deviate from classical logic, suggesting random, or ‘non-computable’ quantum influences. Can quantum cognition operate in the brain? Do classical brain activities simulate quantum processes? Or have biomolecular quantum devices evolved? In this talk I will discuss how a finer-scale, intra-neuronal level of quantum information processing in cytoskeletal microtubules can accumulate, operate upon and integrate quantum information and memory for self-collapse to classical states which regulate axonal firings, controlling behavior.<br />
<br />
'''Monday 14 Jan 2013, 1:00pm'''<br />
* Speaker: Dibyendu Mandal <br />
* Affiliation: Physics Dept., University of Maryland (Jarzynski group)<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: An exactly solvable model of Maxwell’s demon<br />
* Abstract: The paradox of Maxwell’s demon has stimulated numerous thought experiments, leading to discussions about the thermodynamic implications of information processing. However, the field has lacked a tangible example or model of an autonomous, mechanical system that reproduces the actions of the demon. To address this issue, we introduce an explicit model of a device that can deliver work to lift a mass against gravity by rectifying thermal fluctuations, while writing information to a memory register. We solve for the steady-state behavior of the model and construct its nonequilibrium phase diagram. In addition to the engine-like action described above, we identify a Landauer eraser region in the phase diagram where the model uses externally supplied work to remove information from the memory register. Our model offers a simple paradigm for investigating the thermodynamics of information processing by exposing a transparent mechanism of operation.<br />
<br />
'''23 Jan 2013'''<br />
* Speaker: Carlos Brody<br />
* Affiliation: Princeton<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Neural substrates of decision-making in the rat<br />
* Abstract: Gradual accumulation of evidence is thought to be a fundamental component of decision-making. Over the last 16 years, research in non-human primates has revealed neural correlates of evidence accumulation in parietal and frontal cortices, and other brain areas. However, the circuit mechanisms underlying these neural correlates remain unknown. Reasoning that a rodent model of evidence accumulation would allow a greater number of experimental subjects, and therefore experiments, as well as facilitate the use of molecular tools, we developed a rat accumulation of evidence task, the "Poisson Clicks" task. In this task, sensory evidence is delivered in pulses whose precisely-controlled timing varies widely within and across trials. The resulting data are analyzed with models of evidence accumulation that use the richly detailed information of each trial’s pulse timing to distinguish between different decision mechanisms. The method provides great statistical power, allowing us to: (1) provide compelling evidence that rats are indeed capable of gradually accumulating evidence for decision-making; (2) accurately estimate multiple parameters of the decision-making process from behavioral data; and (3) measure, for the first time, the diffusion constant of the evidence accumulator, which we show to be optimal (i.e., equal to zero). In addition, the method provides a trial-by-trial, moment-by-moment estimate of the value of the accumulator, which can then be compared in awake behaving electrophysiology experiments to trial-by-trial, moment-by-moment neural firing rate measures. Based on such a comparison, we describe data and a novel analysis approach that reveals differences between parietal and frontal cortices in the neural encoding of accumulating evidence. 
Finally, using semi-automated training methods to produce tens of rats trained in the Poisson Clicks accumulation of evidence task, we have also used pharmacological inactivation to ask, for the first time, whether parietal and frontal cortices are required for accumulation of evidence, and we are using optogenetic methods to rapidly and transiently inactivate brain regions so as to establish precisely when, during each decision-making trial, it is that each brain region's activity is necessary for performance of the task.<br />
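The accumulation-to-decision scheme the abstract describes can be illustrated with a minimal toy sketch (hypothetical parameters, not the authors' fitted model, which includes additional terms such as sensory adaptation and bounds): each right click adds +1 unit of evidence, each left click adds -1, a diffusion term adds noise, and the sign of the final value gives the choice. Setting `sigma=0` corresponds to the reported finding that the fitted diffusion constant is near zero.

```python
import random

def accumulate(left_clicks, right_clicks, dt=0.001, t_max=1.0, sigma=0.0):
    """Toy evidence accumulator for a single Poisson-clicks trial.

    left_clicks / right_clicks: lists of click times in seconds.
    sigma: diffusion-noise magnitude (0 = noiseless accumulation).
    Returns (choice, trace) where trace is the accumulator over time.
    """
    n_steps = round(t_max / dt)
    # net click evidence per time bin: +1 per right click, -1 per left
    pulses = [0.0] * n_steps
    for c in right_clicks:
        pulses[int(c / dt)] += 1.0
    for c in left_clicks:
        pulses[int(c / dt)] -= 1.0
    a, trace = 0.0, []
    for i in range(n_steps):
        a += pulses[i]
        a += random.gauss(0.0, sigma) * dt ** 0.5  # diffusion term
        trace.append(a)
    choice = "right" if a > 0 else "left"
    return choice, trace
```

With three right clicks against one left click and no noise, the accumulator ends at +2 and the model chooses "right".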
<br />
'''28 Jan 2013'''<br />
* Speaker: Eugene M. Izhikevich<br />
* Affiliation: Brain Corporation<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Spikes<br />
* Abstract: Most communication in the brain is via spikes. While we understand the spike-generation mechanism of individual neurons, we fail to appreciate the spike-timing code and its role in neural computations. The speaker starts with simple models of neuronal spiking and bursting, describes small neuronal circuits that learn spike-timing code via spike-timing dependent plasticity (STDP), and finishes with biologically detailed and anatomically accurate large-scale brain models.<br />
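The "simple models of neuronal spiking and bursting" mentioned in the abstract include Izhikevich's well-known two-variable model (2003), which reproduces many firing patterns by changing four parameters. A minimal Euler-integration sketch, using the standard regular-spiking parameter set (illustrative only):

```python
def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, t_max=200.0):
    """Izhikevich's two-variable spiking neuron model.

    v: membrane potential (mV); u: recovery variable.
    Defaults are the standard regular-spiking parameters.
    Returns the list of spike times (ms) for constant input current I.
    """
    v, u = c, b * c
    spikes = []
    for i in range(int(t_max / dt)):
        if v >= 30.0:               # spike cutoff reached: record and reset
            spikes.append(i * dt)
            v, u = c, u + d
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * (a * (b * v - u))
    return spikes
```

With `I=10` the neuron fires tonically; with `I=0` it settles at rest and emits no spikes. Changing `(a, b, c, d)` yields bursting, chattering, and other regimes described in the talk.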
<br />
'''29 Jan 2013'''<br />
* Speaker: Goren Gordon<br />
* Affiliation: Weizmann Institute<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Hierarchical Curiosity Loops – Model, Behavior and Robotics<br />
* Abstract: Autonomously learning about one's own body and its interaction with the environment is a formidable challenge, yet it is ubiquitous in biology: every animal pup and every human infant accomplish this task in their first few months of life. Furthermore, biological agents' curiosity actively drives them to explore and experiment in order to expedite their learning progress. To bridge the gap between biological and artificial agents, a formal mathematical theory of curiosity was developed that attempts to explain observed biological behaviors and enable the emergence of curiosity in robots. In the talk, I will present the hierarchical curiosity loops model, its application to rodents' exploratory behavior, and its implementation in a fully autonomous learning, behaving, reaching robot.<br />
<br />
'''29 Jan 2013'''<br />
* Speaker: Jenny Read<br />
* Affiliation: Institute of Neuroscience, Newcastle University<br />
* Host: Sarah<br />
* Status: confirmed<br />
* Title: Stereoscopic vision<br />
* Abstract: [To be written]<br />
<br />
'''7 Feb 2013'''<br />
* Speaker: Valero Laparra<br />
* Affiliation: University of Valencia<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Empirical statistical analysis of phases in Gabor filtered natural images<br />
* Abstract:<br />
<br />
'''20 Feb 2013'''<br />
* Speaker: Dolores Bozovic<br />
* Affiliation: UCLA<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Bifurcations and phase-locking dynamics in the auditory system<br />
* Abstract: The inner ear constitutes a remarkable biological sensor that exhibits nanometer-scale sensitivity of mechanical detection. The first step in auditory processing is performed by hair cells, which convert movement into electrical signals via the opening of mechanically gated ion channels. These cells operate in a viscous medium, but can nevertheless sustain oscillations, amplify incoming signals, and even exhibit spontaneous motility, indicating the presence of an underlying active amplification system. Theoretical models have proposed that a hair cell constitutes a nonlinear system with an internal feedback mechanism that can drive it across a bifurcation and into an unstable regime. Our experiments explore the nonlinear response as well as feedback mechanisms that enable self-tuning already at the peripheral level, as measured in vitro on sensory tissue. A simple dynamical systems framework will be discussed that captures the main features of the experimentally observed behavior in the form of an Arnold tongue.<br />
<br />
'''27 March 2013'''<br />
* Speaker: Dale Purves<br />
* Affiliation: Duke<br />
* Host: Sarah<br />
* Status: confirmed<br />
* Title: How Visual Evolution Determines What We See<br />
* Abstract: Information about the physical world is excluded from visual stimuli by the nature of biological vision (the inverse optics problem). Nonetheless, humans and other visual animals routinely succeed in their environments. The talk will explain how the assignment of perceptual values to visual stimuli according to the frequency of occurrence of stimulus patterns resolves the inverse problem and determines the basic visual qualities we see. This interpretation of vision implies that the best (and perhaps the only) way to understand visual system circuitry is to evolve it, an idea supported by recent work.<br />
<br />
'''9 April 2013'''<br />
* Speaker: Mounya Elhilali<br />
* Affiliation: Johns Hopkins<br />
* Host: Tyler<br />
* Status: confirmed<br />
* Title: Attention at the cocktail party: Neural bases and computational strategies for auditory scene analysis<br />
* Abstract: The perceptual organization of sounds in the environment into coherent objects is a feat constantly facing the auditory system. It manifests itself in the everyday challenge, faced by humans and animals alike, of parsing complex acoustic information arising from multiple sound sources into separate auditory streams. While the process seems effortless, uncovering the neural mechanisms and computational principles underlying this remarkable ability remains a challenge for both the experimental and theoretical neuroscience communities. In this talk, I discuss the potential role of neuronal tuning in mammalian primary auditory cortex in mediating this process. I also examine the role of mechanisms of attention in adapting this neural representation to reflect both the sensory content and the changing behavioral context of complex acoustic scenes.<br />
<br />
'''17th of April 2013'''<br />
* Speaker: Wiktor Młynarski<br />
* Affiliation: Max Planck Institute for Mathematics in the Sciences<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: Statistical Models of Binaural Sounds<br />
* Abstract: The auditory system exploits disparities in the sounds arriving at the left and right ear to extract information about the spatial configuration of sound sources. According to the widely acknowledged Duplex Theory, sounds of low frequency are localized based on Interaural Time Differences (ITDs) and localization of high frequency sources relies on Interaural Level Differences (ILDs). Natural sounds, however, possess a rich structure and contain multiple frequency components. This leads to the question: what are the contributions of different cues to sound position identification in the natural environment and how much information do they carry about its spatial structure? In this talk, I will present my attempts to answer the above questions using statistical, generative models of naturalistic (simulated) and fully natural binaural sounds.<br />
<br />
'''15 May 2013'''<br />
* Speaker: Byron Yu<br />
* Affiliation: CMU<br />
* Host: Bruno/Jose (jointly sponsored with CNEP)<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''22 May 2013'''<br />
* Speaker: Bijan Pesaran<br />
* Affiliation: NYU<br />
* Host: Bruno/Jose (jointly sponsored with CNEP)<br />
* Status: confirmed <br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
=== 2011/12 academic year ===<br />
<br />
'''15 Sep 2011 (Thursday, at noon)'''<br />
* Speaker: Kathrin Berkner<br />
* Affiliation: Ricoh Innovations Inc.<br />
* Host: Ivana Tosic<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''21 Sep 2011'''<br />
* Speaker: Mike Kilgard<br />
* Affiliation: UT Dallas<br />
* Host: Michael Silver<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''27 Sep 2011'''<br />
* Speaker: Moshe Gur<br />
* Affiliation: Dept. of Biomedical Engineering, Technion, Israel Institute of Technology<br />
* Host: Bruno/Stan<br />
* Status: Confirmed<br />
* Title: On the unity of perception: How does the brain integrate activity evoked at different cortical loci?<br />
* Abstract: Any physical device we know, including computers, when comparing A to B must send the information to point C. I have done experiments in three modalities (somatosensory, auditory, and visual) in which two different loci in the primary cortex are stimulated, and I argue that the "machine" converging hypothesis cannot explain the perceptual results. Thus we must assume a non-converging mechanism whereby the brain, at times, can compare (integrate, process) events that take place at different loci without sending the information to a common target. Once we allow for such a mechanism, many phenomena can be viewed differently. Take, for example, the question of how and where multi-sensory integration takes place: we perceive a synchronized talking face, yet detailed visual and auditory information is represented at very different brain loci.<br />
<br />
'''5 Oct 2011'''<br />
* Speaker: Susanne Still<br />
* Affiliation: University of Hawaii at Manoa<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: Predictive power, memory and dissipation in learning systems operating far from thermodynamic equilibrium<br />
* Abstract: Understanding the physical processes that underlie the functioning of biological computing machinery often requires describing processes that occur far from thermodynamic equilibrium. In recent years significant progress has been made in this area, most notably through Jarzynski's work relation and Crooks' fluctuation theorem. In this talk I will explore how dissipation of energy is related to a system's information-processing inefficiency. The focus is on driven systems that are embedded in a stochastic operating environment. If we describe the system as a state machine, then we can interpret the stochastic dynamics as performing a computation that results in an (implicit) model of the stochastic driving signal. I will show that instantaneous non-predictive information, which serves as a measure of model inefficiency, provides a lower bound on the average dissipated work. This implies that learning systems with larger predictive power can operate more energy-efficiently. We could speculate that biological systems may have evolved to reflect this kind of adaptation. One interesting insight is that this purely physical requirement is perfectly in line with the general belief that a useful model must be predictive (at fixed model complexity). Our result thereby ties together ideas from learning theory with basic non-equilibrium thermodynamics.<br />
<br />
'''19 Oct 2011'''<br />
* Speaker: Graham Cummins<br />
* Affiliation: WSU<br />
* Host: Jeff Teeters<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''26 Oct 2011'''<br />
* Speaker: Shinji Nishimoto<br />
* Affiliation: Gallant lab, UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''14 Dec 2011'''<br />
* Speaker: Austin Roorda<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: How the unstable eye sees a stable and moving world<br />
* Abstract:<br />
<br />
'''11 Jan 2012'''<br />
* Speaker: Ken Nakayama<br />
* Affiliation: Harvard University<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Subjective Contours<br />
* Abstract: The concept of the receptive field in visual science has been transformative. It fueled great discoveries of the second half of the 20th century, providing the dominant understanding of how the visual system works at its early stages. Its reign has been extended to the field of object recognition, where, in the form of a linear classifier, it provides a framework for understanding visual object recognition (DiCarlo and Cox, 2007).<br />
Untamed, however, are areas of visual perception, now more or less ignored, dubbed variously the 2.5-D sketch, mid-level vision, or surface representations. Here, neurons with their receptive fields seem unable to bridge the gap, to supply us with even a plausible speculative framework for understanding amodal completion, subjective contours and other surface phenomena. Correspondingly, these areas have become a backwater: ignored, leapt over.<br />
Subjective contours, however, remain as vivid as ever, even more so.<br />
Every day, our visual system makes countless inferences as to the layout of the world's surfaces and objects. What is remarkable is that subjective contours visibly reveal these inferences.<br />
<br />
'''Tuesday, 24 Jan 2012'''<br />
* Speaker: Aniruddha Das<br />
* Affiliation: Columbia University<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''22 Feb 2012'''<br />
* Speaker: Elad Schneidman <br />
* Affiliation: Department of Neurobiology, Weizmann Institute of Science<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Sparse high order interaction networks underlie learnable neural population codes<br />
* Abstract:<br />
<br />
'''29 Feb 2012 (at noon as usual)'''<br />
* Speaker: Heather Read<br />
* Affiliation: U. Connecticut<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: "Transformation of sparse temporal coding from auditory colliculus and cortex"<br />
* Abstract: TBD<br />
<br />
'''1 Mar 2012 (note: Thurs)'''<br />
* Speaker: Daniel Zoran<br />
* Affiliation: Hebrew University, Jerusalem<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''7 Mar 2012'''<br />
* Speaker: David Sivak<br />
* Affiliation: UCB<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''8 Mar 2012'''<br />
* Speaker: Ivan Schwab<br />
* Affiliation: UC Davis<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Evolution's Witness: How Eyes Evolved<br />
* Abstract:<br />
<br />
'''14 Mar 2012'''<br />
* Speaker: David Sussillo<br />
* Affiliation:<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''18 April 2012'''<br />
* Speaker: Kristofer Bouchard<br />
* Affiliation: UCSF<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Cortical Foundations of Human Speech Production<br />
* Abstract:<br />
<br />
'''23 May 2012''' (rescheduled from April 11)<br />
* Speaker: Logan Grosenick<br />
* Affiliation: Stanford, Deisseroth & Suppes Labs<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: Acquisition, creation, & analysis of 4D light fields with applications to calcium imaging & optogenetics<br />
* Abstract: In Light Field Microscopy (LFM), images can be computationally refocused after they are captured [1]. This permits acquiring focal stacks and reconstructing volumes from a single camera frame. In Light Field Illumination (LFI), the same ideas can be used to create an illumination system that can deliver focused light to any position in a volume without moving optics, and these two devices (LFM/LFI) can be used together in the same system [2]. So far, these imaging and illumination systems have largely been used independently in proof-of-concept experiments [1,2]. In this talk I will discuss applications of a combined scanless volumetric imaging and volumetric illumination system applied to 4D calcium imaging and photostimulation of neurons in vivo and in vitro. The volumes resulting from these methods are large (>500,000 voxels per time point), collected at 10-100 frames per second, and highly correlated in space and time. Analyzing such data has required the development and application of machine learning methods appropriate to large, sparse, nonnegative data, as well as the estimation of neural graphical models from calcium transients. This talk will cover the reconstruction and creation of volumes in a microscope using Light Fields [1,2], and the current state-of-the-art for analyzing these large volumes in the context of calcium imaging and optogenetics. <br />
<br />
[1] M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz. Light Field Microscopy. ACM Transactions on Graphics 25(3), Proceedings of SIGGRAPH 2006.<br />
[2] M. Levoy, Z. Zhang, and I. McDowall. Recording and controlling the 4D light field in a microscope. Journal of Microscopy, Volume 235, Part 2, 2009, pp. 144-162. Cover article.<br />
<br />
BIO: Logan received bachelor's degrees with honors in Biology and Psychology from Stanford, and a Master's in Statistics, also from Stanford. He is a Ph.D. candidate in the Neurosciences Program working in the labs of Karl Deisseroth and Patrick Suppes, and a trainee at the Stanford Center for Mind, Brain, and Computation. He is interested in developing and applying novel computational imaging and machine learning techniques in order to observe, control, and understand neuronal circuit dynamics.<br />
<br />
'''7 June 2012''' (Thursday)<br />
* Speaker: Mitya Chklovskii<br />
* Affiliation: Janelia<br />
* Host: Bruno<br />
* Status:<br />
* Title:<br />
* Abstract:<br />
<br />
'''27 June 2012''' <br />
* Speaker: Jerry Feldman<br />
* Affiliation:<br />
* Host: Bruno<br />
* Status:<br />
* Title:<br />
* Abstract:<br />
<br />
'''30 July 2012''' <br />
* Speaker: Lucas Theis<br />
* Affiliation: Matthias Bethge lab, Werner Reichardt Centre for Integrative Neuroscience, Tübingen<br />
* Host: Jascha<br />
* Status: Confirmed<br />
* Title: Hierarchical models of natural images<br />
* Abstract: Probabilistic models of natural images have been used to solve a variety of computer vision tasks and as a means to better understand the computations performed by the visual system in the brain. Many theoretical considerations and biological observations suggest that natural image models should be hierarchically organized, yet to date the best-known models are still based on what are better described as shallow representations. In this talk, I will present two image models. One is based on the idea of Gaussianization for greedily constructing hierarchical generative models. I will show that when combined with independent subspace analysis, it is able to compete with the state of the art for modeling image patches. The other model combines mixtures of Gaussian scale mixtures with a directed graphical model and multiscale image representations, and is able to generate highly structured images of arbitrary size. Evaluating the model's likelihood and comparing it to a large number of other image models shows that it might well be the best model of natural images yet.<br />
<br />
(joint work with Reshad Hosseini and Matthias Bethge)<br />
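A building block of the second model above is the Gaussian scale mixture. A minimal sketch (hypothetical scale set, for illustration only) shows how mixing zero-mean Gaussians of different scales produces the heavy-tailed, high-kurtosis statistics characteristic of natural image coefficients, whereas a single Gaussian has zero excess kurtosis:

```python
import random

def sample_gsm(n, scales=(0.5, 1.0, 4.0)):
    """Draw n samples from a simple Gaussian scale mixture:
    pick a scale z uniformly from `scales`, then draw x ~ N(0, z^2).
    The mixture has heavier tails than any single Gaussian component."""
    return [random.gauss(0.0, random.choice(scales)) for _ in range(n)]

def excess_kurtosis(xs):
    """Sample excess kurtosis (0 for a Gaussian, > 0 for heavy tails)."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / var ** 2 - 3.0
```

For the scales above the mixture's theoretical excess kurtosis is about 4.8, so even a moderate sample separates it clearly from a Gaussian.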
<br />
=== 2010/11 academic year ===<br />
<br />
'''02 Sep 2010'''<br />
* Speaker: Johannes Burge<br />
* Affiliation: University of Texas at Austin<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''8 Sep 2010'''<br />
* Speaker: Tobi Szuts<br />
* Affiliation: Meister Lab/ Harvard U.<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Wireless recording of neural activity in the visual cortex of a freely moving rat.<br />
* Abstract: Conventional neural recording systems restrict behavioral experiments to a flat indoor environment compatible with the cable that tethers the subject to the recording instruments. To overcome these constraints, we developed a wireless multi-channel system for recording neural signals from a freely moving animal the size of a rat or larger. The device takes up to 64 voltage signals from implanted electrodes, samples each at 20 kHz, time-division multiplexes them onto a single output line, and transmits that output by radio frequency to a receiver and recording computer more than 60 m away. The system introduces less than 4 µV RMS of electrode-referred noise, comparable to wired recording systems and considerably less than biological noise. The system has greater channel count or transmission distance than existing telemetry systems. The wireless system has been used to record from the visual cortex of a rat during unconstrained conditions. Outdoor recordings show that V1 activity is modulated by nest-building activity. During unguided behavior indoors, neurons responded rapidly and consistently to changes in light level, suppressive effects were prominent in response to an illuminant transition, and firing rate was strongly modulated by locomotion. Neural firing in the visual cortex is relatively sparse, and moderate correlations are observed over large distances, suggesting that synchrony is driven by global processes.<br />
<br />
'''29 Sep 2010'''<br />
* Speaker: Vikash Gilja<br />
* Affiliation: Stanford University<br />
* Host: Charles<br />
* Status: Confirmed<br />
* Title: Towards Clinically Viable Neural Prosthetic Systems.<br />
* Abstract:<br />
<br />
'''20 Oct 2010'''<br />
* Speaker: Alexandre Francois<br />
* Affiliation: USC<br />
* Host: <br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''3 Nov 2010'''<br />
* Speaker: Eric Jonas and Vikash Mansinghka<br />
* Affiliation: Navia Systems<br />
* Host: Jascha<br />
* Status: Confirmed<br />
* Title: Natively Probabilistic Computation: Principles, Artifacts, Architectures and Applications<br />
* Abstract: Complex probabilistic models and Bayesian inference are becoming increasingly critical across science and industry, especially in large-scale data analysis. They are also central to our best computational accounts of human cognition, perception and action. However, all these efforts struggle with the infamous curse of dimensionality. Rich probabilistic models can seem hard to write and even harder to solve, as specifying and calculating probabilities often appears to require the manipulation of exponentially (and sometimes infinitely) large tables of numbers.<br />
<br />
We argue that these difficulties reflect a basic mismatch between the needs of probabilistic reasoning and the deterministic, functional orientation of our current hardware, programming languages and CS theory. To mitigate these issues, we have been developing a stack of abstractions for natively probabilistic computation, based around stochastic simulators (or samplers) for distributions, rather than evaluators for deterministic functions. Ultimately, our aim is to produce a model of computation and the associated hardware and programming tools that are as suited for uncertain inference and decision-making as our current computers are for precise arithmetic.<br />
<br />
In this talk, we will give an overview of the entire stack of abstractions supporting natively probabilistic computation, with technical detail on several hardware and software artifacts we have implemented so far. We will also touch on some new theoretical results regarding the computational complexity of probabilistic programs. Throughout, we will motivate and connect this work to some current applications in biomedical data analysis and computer vision, as well as potential hypotheses regarding the implementation of probabilistic computation in the brain.<br />
<br />
This talk includes joint work with Keith Bonawitz, Beau Cronin, Cameron Freer, Daniel Roy and Joshua Tenenbaum.<br />
<br />
BRIEF BIOGRAPHY<br />
<br />
Vikash Mansinghka is a co-founder and the CTO of Navia Systems, a venture-funded startup company building natively probabilistic computing machines. He spent 10 years at MIT, eventually earning an SB in Mathematics, an SB in Computer Science, an MEng in Computer Science, and a PhD in Computation. He held graduate fellowships from the NSF and MIT's Lincoln Laboratories, and his PhD dissertation won the 2009 MIT George M. Sprowls award for best dissertation in computer science. He currently serves on DARPA's Information Science and Technology (ISAT) Study Group.<br />
<br />
Eric Jonas is a co-founder of Navia Systems, responsible for in-house accelerated inference research and development. He spent ten years at MIT, where he earned SB degrees in electrical engineering and computer science and in neurobiology, and an MEng in EECS, with a neurobiology PhD expected soon. He is passionate about biological applications of probabilistic reasoning and hopes to use Navia's capabilities to combine data from biological science, clinical histories, and patient outcomes into seamless models.<br />
<br />
'''8 Nov 2010'''<br />
* Speaker: Patrick Ruther<br />
* Affiliation: Imtek, University of Freiburg<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''10 Nov 2010'''<br />
* Speaker: Aurel Lazar<br />
* Affiliation: Department of Electrical Engineering, Columbia University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Encoding Visual Stimuli with a Population of Hodgkin-Huxley Neurons<br />
* Abstract: We first present a general framework for the reconstruction of natural video scenes encoded with a population of spiking neural circuits with random thresholds. The visual encoding system consists of a bank of filters, modeling the visual receptive fields, in cascade with a population of neural circuits, modeling encoding with spikes in the early visual system. The neuron models considered include integrate-and-fire neurons and ON-OFF neuron pairs with threshold-and-fire spiking mechanisms. All thresholds are assumed to be random. We show that for both time-varying and space-time-varying stimuli, neural spike encoding is akin to taking noisy measurements on the stimulus.<br />
Second, we formulate the reconstruction problem as the minimization of a suitable cost functional in a finite-dimensional vector space and provide an explicit algorithm for stimulus recovery. We also present a general solution using the theory of smoothing splines in Reproducing Kernel Hilbert Spaces. We provide examples for both synthetic video and natural scenes, and show that the quality of the reconstruction degrades gracefully as the threshold variability of the neurons increases.<br />
Third, we demonstrate a number of simple operations on the original visual stimulus, including translations, rotations and zooming. All these operations are natively executed in the spike domain. The processed spike trains are decoded for the faithful recovery of the stimulus and its transformations.<br />
Finally, we extend the above results to neural encoding circuits built with Hodgkin-Huxley neurons.<br />
References:<br />
Aurel A. Lazar, Eftychios A. Pnevmatikakis and Yiyin Zhou, "Encoding Natural Scenes with Neural Circuits with Random Thresholds," Vision Research, 2010, Special Issue on Mathematical Models of Visual Coding. http://dx.doi.org/10.1016/j.visres.2010.03.015<br />
Aurel A. Lazar, "Population Encoding with Hodgkin-Huxley Neurons," IEEE Transactions on Information Theory, Vol. 56, No. 2, pp. 821-837, February 2010, Special Issue on Molecular Biology and Neuroscience. http://dx.doi.org/10.1109/TIT.2009.2037040<br />
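The claim that spike encoding "is akin to taking noisy measurements on the stimulus" can be illustrated with a single ideal integrate-and-fire time encoder. This is a minimal sketch with illustrative parameters, not the papers' full population model: the neuron integrates the biased stimulus and fires at a fixed threshold, so each inter-spike interval pins down one integral of the stimulus (the "t-transform" measurement).

```python
def iaf_encode(stimulus, dt=0.001, bias=1.0, kappa=1.0, delta=0.05):
    """Ideal integrate-and-fire time encoder (single neuron).

    Integrates (u(t) + bias) / kappa over samples of the stimulus and
    fires whenever the integral reaches the threshold delta, then
    resets while keeping the overshoot.  Each inter-spike interval
    [t_k, t_{k+1}] yields one measurement of the stimulus: the
    integral of u(t) + bias over that interval equals kappa * delta.
    Returns the list of spike times (s).
    """
    y, spikes = 0.0, []
    for i, u in enumerate(stimulus):
        y += dt * (u + bias) / kappa
        if y >= delta:
            spikes.append((i + 1) * dt)
            y -= delta  # reset, carrying over the excess integral
    return spikes
```

For a zero stimulus the bias alone produces a regular spike train at rate `bias / (kappa * delta)`; raising the stimulus raises the spike rate, which is exactly the information a decoder inverts.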
<br />
'''11 Nov 2010''' (UCB holiday)<br />
* Speaker: Martha Nari Havenith<br />
* Affiliation: UCL<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: Finding spike timing in the visual cortex - Oscillations as the internal clock of vision?<br />
* Abstract:<br />
<br />
'''19 Nov 2010''' (note: on Friday because of SFN)<br />
* Speaker: Dan Butts<br />
* Affiliation: UMD<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: Common roles of inhibition in visual and auditory processing.<br />
* Abstract: The role of inhibition in sensory processing is often obscured in extracellular recordings, because the absence of a neuronal response associated with inhibition might also be explained by a simple lack of excitation. However, increasingly, evidence from intracellular recordings demonstrates important roles of inhibition in shaping the stimulus selectivity of sensory neurons in both the visual and auditory systems. We have developed a nonlinear modeling approach that can identify putative excitatory and inhibitory inputs to a neuron using standard extracellular recordings, and have applied these techniques to understand the role of inhibition in shaping sensory processing in visual and auditory areas. In pre-cortical visual areas (retina and LGN), we find that inhibition likely plays a role in generating temporally precise responses, and mediates adaptation to changing contrast. In an auditory pre-cortical area (inferior colliculus) identified inhibition has nearly identical appearance and functions in temporal processing and adaptation. Thus, we predict common roles of inhibition in these sensory areas, and more generally demonstrate general methods for characterizing the nonlinear computations that comprise sensory processing.<br />
<br />
'''24 Nov 2010'''<br />
* Speaker: Eizaburo Doi<br />
* Affiliation: NYU<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
<br />
'''29 Nov 2010 - informal talk'''<br />
* Speaker: Eero Lehtonen<br />
* Affiliation: UTU Finland<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Memristors<br />
* Abstract:<br />
<br />
'''1 Dec 2010'''<br />
* Speaker: Gadi Geiger<br />
* Affiliation: MIT<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: Visual and Auditory Perceptual Modes that Characterize Dyslexics<br />
* Abstract: I will describe how dyslexics' visual and auditory perception is wider and more diffuse than that of typical readers, which suggests wider neural tuning in dyslexics. In addition, I will describe how this processing relates to difficulties in reading. Strengthening the argument, and more importantly helping dyslexics, I will describe a regimen of practice that improves reading in dyslexics while narrowing perception.<br />
<br />
<br />
'''13 Dec 2010'''<br />
* Speaker: Jorg Lueke<br />
* Affiliation: FIAS<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Linear and Non-linear Approaches to Component Extraction and Their Applications to Visual Data<br />
* Abstract: In the nervous system of humans and animals, sensory data are represented as combinations of elementary data components. While for data such as sound waveforms the elementary components combine linearly, other data can better be modeled by non-linear forms of component superpositions. I motivate and discuss two models with binary latent variables: one using standard linear superpositions of basis functions and one using non-linear superpositions. Crucial for the applicability of both models are efficient learning procedures. I briefly introduce a novel training scheme (ET) and show how it can be applied to probabilistic generative models. For linear and non-linear models the scheme efficiently infers the basis functions as well as the level of sparseness and data noise. In large-scale applications to image patches, we show results on the statistics of inferred model parameters. Differences between the linear and non-linear models are discussed, and both models are compared to results of standard approaches in the literature and to experimental findings. Finally, I briefly discuss learning in a recent model that takes explicit component occlusions into account.<br />
<br />
'''15 Dec 2010'''<br />
* Speaker: Claudia Clopath<br />
* Affiliation: Université Paris Descartes<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
<br />
'''18 Jan 2011'''<br />
* Speaker: Siwei Lyu<br />
* Affiliation: Computer Science Department, University at Albany, SUNY<br />
* Host: Bruno<br />
* Status: confirmed <br />
* Title: Divisive Normalization as an Efficient Coding Transform: Justification and Evaluation<br />
* Abstract:<br />
<br />
'''19 Jan 2011'''<br />
* Speaker: David Field (informal talk)<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: Tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''25 Jan 2011'''<br />
* Speaker: Ruth Rosenholtz<br />
* Affiliation: Dept. of Brain & Cognitive Sciences, Computer Science and AI Lab, MIT<br />
* Host: Bruno<br />
* Status: Confirmed <br />
* Title: What your visual system sees where you are not looking<br />
* Abstract:<br />
<br />
'''26 Jan 2011'''<br />
* Speaker: Ernst Niebur<br />
* Affiliation: Johns Hopkins U<br />
* Host: Fritz<br />
* Status: Confirmed <br />
* Title: <br />
* Abstract:<br />
<br />
'''16 March 2011'''<br />
* Speaker: Vladimir Itskov<br />
* Affiliation: University of Nebraska-Lincoln<br />
* Host: Chris<br />
* Status: Confirmed <br />
* Title: <br />
* Abstract:<br />
<br />
'''23 March 2011'''<br />
* Speaker: Bruce Cumming<br />
* Affiliation: National Institutes of Health<br />
* Host: Ivana<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract:<br />
<br />
'''27 April 2011'''<br />
* Speaker: Lubomir Bourdev<br />
* Affiliation: Computer Science, UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: "Poselets and Their Applications in High-Level Computer Vision Problems"<br />
* Abstract:<br />
<br />
'''12 May 2011 (note: Thursday)'''<br />
* Speaker: Jack Culpepper<br />
* Affiliation: Redwood Center/EECS<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''26 May 2011'''<br />
* Speaker: Ian Stevenson<br />
* Affiliation: Northwestern University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Explaining tuning curves by estimating interactions between neurons<br />
* Abstract: One of the central tenets of systems neuroscience is that tuning curves are a byproduct of the interactions between neurons. Using multi-electrode recordings and recently developed inference techniques, we can begin to examine this idea in detail and study how well we can explain the functional properties of neurons using the activity of other simultaneously recorded neurons. Here we examine datasets from 6 different brain areas recorded during typical sensorimotor tasks, each with ~100 simultaneously recorded neurons. Using these datasets, we measured the extent to which interactions between neurons can explain the tuning properties of individual neurons. We found that, in almost all areas, modeling interactions between 30-50 neurons allows more accurate spike prediction than tuning curves. This suggests that tuning can, in some sense, be explained by interactions between neurons in a variety of brain areas, even when recordings consist of relatively small numbers of neurons.<br />
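A toy version of the comparison in this abstract can be sketched in a few lines. Everything here is invented for illustration (cosine-tuned Poisson neurons sharing a common noise source), not taken from the datasets in the talk; the point is only that a coupling model fit on the other neurons' spike counts can out-predict a stimulus-only tuning curve.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 4000, 30                           # time bins, neurons (illustrative)
theta = rng.uniform(0, 2 * np.pi, T)      # stimulus direction per bin

# Synthetic population: cosine-tuned neurons with a shared noise source,
# so each neuron's spikes are partly predictable from its neighbours.
pref = rng.uniform(0, 2 * np.pi, N)
shared = rng.normal(0, 1.0, T)            # common input induces correlations
rates = 0.05 * np.exp(0.5 + np.cos(theta[:, None] - pref[None, :])
                      + shared[:, None])
spikes = rng.poisson(rates).astype(float)

y = spikes[:, 0]                          # the neuron to explain

def r_squared(design, y):
    """Training-set fraction of variance of y explained by least squares."""
    X = np.column_stack([np.ones(len(y)), design])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

# Tuning-curve model: rate as a function of stimulus direction only.
r2_tuning = r_squared(np.column_stack([np.cos(theta), np.sin(theta)]), y)
# Coupling model: rate predicted from the other neurons' spike counts.
r2_coupling = r_squared(spikes[:, 1:], y)
print(r2_tuning, r2_coupling)
```

On synthetic data like this, the coupling model explains more variance than the tuning curve alone, mirroring the direction of the talk's result.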
<br />
'''1 June 2011'''<br />
* Speaker: Michael Oliver<br />
* Affiliation: Gallant lab<br />
* Host: Bruno<br />
* Status: Tentative <br />
* Title: <br />
* Abstract:<br />
<br />
'''8 June 2011'''<br />
* Speaker: Alyson Fletcher<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: Generalized Approximate Message Passing for Neural Receptive Field Estimation and Connectivity<br />
* Abstract: Fundamental to understanding sensory encoding and connectivity of neurons are effective tools for developing and validating complex mathematical models from experimental data. In this talk, I present a graphical models approach to the problems of neural connectivity reconstruction under multi-neuron excitation and to receptive field estimation of sensory neurons in response to stimuli. I describe a new class of Generalized Approximate Message Passing (GAMP) algorithms for a general class of inference problems on graphical models based Gaussian approximations of loopy belief propagation. The GAMP framework is extremely general, provides a systematic procedure for incorporating a rich class of nonlinearities, and is computationally tractable with large amounts of data. In addition, for both the connectivity reconstruction and parameter estimation problems, I show that GAMP-based estimation can naturally incorporate sparsity constraints in the model that arise from the fact that only a small fraction of the potential inputs have any influence on the output of a particular neuron. A simulation of reconstruction of cortical neural mapping under multi-neuron excitation shows that GAMP offers improvement over previous compressed sensing methods. The GAMP method is also validated on estimation of linear nonlinear Poisson (LNP) cascade models for neural responses of salamander retinal ganglion cells.<br />
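GAMP itself is too involved for a short sketch, but the role of the sparsity constraint it exploits can be illustrated with a simpler stand-in: lasso regression solved by FISTA on a simulated receptive-field estimation problem. All sizes and values below are invented for illustration, and FISTA is named plainly as a substitute for GAMP.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_trials, k = 200, 120, 10        # pixels, stimuli, nonzero weights

# Ground-truth receptive field: only k pixels influence the response.
w_true = np.zeros(n_pix)
idx = rng.choice(n_pix, k, replace=False)
w_true[idx] = rng.choice([-1.0, 1.0], k) * rng.uniform(1.0, 2.0, k)

S = rng.normal(0, 1, (n_trials, n_pix))            # white-noise stimuli
r = S @ w_true + rng.normal(0, 0.5, n_trials)      # noisy linear responses

# FISTA for the lasso: min_w 0.5*||r - S w||^2 + lam*||w||_1
lam = 15.0
L = np.linalg.norm(S, 2) ** 2                      # Lipschitz const. of grad
w = np.zeros(n_pix)
z, t = w.copy(), 1.0
for _ in range(1000):
    g = S.T @ (S @ z - r)
    u = z - g / L
    w_new = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0)
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    z = w_new + (t - 1) / t_new * (w_new - w)
    w, t = w_new, t_new

err_sparse = np.linalg.norm(w - w_true) / np.linalg.norm(w_true)
w_dense = np.linalg.lstsq(S, r, rcond=None)[0]     # min-norm least squares
err_dense = np.linalg.norm(w_dense - w_true) / np.linalg.norm(w_true)
print(err_sparse, err_dense)
```

With fewer trials than pixels, the sparse estimate recovers the field far better than the dense least-squares baseline, which is the same advantage the abstract attributes to sparsity-aware estimation.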
<br />
=== 2009/10 academic year ===<br />
<br />
'''2 September 2009''' <br />
* Speaker: Keith Godfrey<br />
* Affiliation: University of Cambridge<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''7 October 2009'''<br />
* Speaker: Anita Schmid<br />
* Affiliation: Cornell University<br />
* Host: Kilian<br />
* Status: Confirmed<br />
* Title: Subpopulations of neurons in visual area V2 perform differentiation and integration operations in space and time<br />
* Abstract: The interconnected areas of the visual system work together to find object boundaries in visual scenes. Primary visual cortex (V1) mainly extracts oriented luminance boundaries, while secondary visual cortex (V2) also detects boundaries defined by differences in texture. How the outputs of V1 neurons are combined to allow for the extraction of these more complex boundaries in V2 is not yet clear. To address this question, we probed the processing of orientation signals in single neurons in V1 and V2, focusing on response dynamics of neurons to patches of oriented gratings and to combinations of gratings in neighboring patches and sequential time frames. We found two kinds of response dynamics in V2, both of which are different from those of V1 neurons. While V1 neurons in general prefer one orientation, one subpopulation of V2 neurons (“transient”) shows a temporally dynamic preference, resulting in a preference for changes in orientation. The second subpopulation of V2 neurons (“sustained”) responds similarly to V1 neurons, but with a delay. The dynamics of nonlinear responses to combinations of gratings reinforce these distinctions: the dynamics enhance the preference of V1 neurons for continuous orientations, and enhance the preference of V2 transient neurons for discontinuous ones. We propose that transient neurons in V2 perform a differentiation operation on the V1 input, both spatially and temporally, while the sustained neurons perform an integration operation. We show that a simple feedforward network with delayed inhibition can account for the temporal but not for the spatial differentiation operation.<br />
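The proposed temporal differentiation via delayed inhibition can be caricatured in a few lines. This is an illustrative sketch with made-up stimulus and parameters, not the model fit in the talk: a "transient" unit receives V1 drive minus a delayed inhibitory copy, so it responds only when the drive changes, while a "sustained" unit carries a delayed copy of the drive.

```python
import numpy as np

def v1_drive(orientations, pref=0.0):
    """Rectified cosine tuning: the V1 input to the V2 units."""
    return np.maximum(np.cos(2 * (orientations - pref)), 0)

# Stimulus: one orientation held constant, then switched halfway through.
stim = np.concatenate([np.zeros(50), np.full(50, np.pi / 2)])
drive = v1_drive(stim)            # unit tuned to the first orientation

delay = 5
delayed = np.concatenate([np.zeros(delay), drive[:-delay]])

# Transient unit: drive minus delayed inhibition = temporal differentiation.
transient = np.maximum(drive - delayed, 0)
# Sustained unit: delayed copy of the V1 response.
sustained = delayed

print(np.count_nonzero(transient), np.count_nonzero(sustained))
```

The transient unit fires only for `delay` samples after its preferred orientation appears, while the sustained unit responds throughout the stimulus, just later, matching the two V2 subpopulations described above.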
<br />
'''28 October 2009'''<br />
* Speaker: Andrea Benucci<br />
* Affiliation: Institute of Ophthalmology, University College London<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Stimulus dependence of the functional connectivity between neurons in primary visual cortex<br />
* Abstract: It is known that visual stimuli are encoded by the concerted activity of large populations of neurons in visual cortical areas. However, it is only recently that recording techniques have been made available to study such activations from large ensembles of neurons simultaneously, with millisecond temporal precision and tens of microns spatial resolution. I will present data from voltage-sensitive dye (VSD) imaging and multi-electrode recordings (“Utah” probes) from the primary visual cortex of the cat (V1). I will discuss the relationship between two fundamental cortical maps of the visual system: the map of retinotopy and the map of orientation. Using spatially localized and full-field oriented stimuli, we studied the functional interdependency of these maps. I will describe traveling and standing waves of cortical activity and their key role as a dynamical substrate for the spatio-temporal coding of visual information. I will further discuss the properties of the spatio-temporal code in the context of continuous visual stimulation. While recording population responses to a sequence of oriented stimuli, we asked how responses to individual stimuli summate over time. We found that such rules are mostly linear, supporting the idea that spatial and temporal codes in area V1 operate largely independently. However, these linear rules of summation fail when the visual drive is removed, suggesting that the visual cortex can readily switch between a dynamical regime where either feed-forward or intra-cortical inputs determine the response properties of the network.<br />
<br />
'''12 November 2009 (Thursday)'''<br />
* Speaker: Song-Chun Zhu<br />
* Affiliation: UCLA<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''18 November 2009'''<br />
* Speaker: Dan Graham<br />
* Affiliation: Dept. of Mathematics, Dartmouth College<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: The Packet-Switching Brain: A Hypothesis<br />
* Abstract: Despite great advances in our understanding of neural responses to natural stimuli, the basic structure of the neural code remains elusive. In this talk, I will describe a novel hypothesis regarding the fundamental structure of neural coding in mammals. In particular, I propose that an internet-like routing architecture (specifically packet-switching) underlies neocortical processing, and I propose means of testing this hypothesis via neural response sparseness measurements. I will synthesize a host of suggestive evidence that supports this notion and will, more generally, argue in favor of a large scale shift from the now dominant “computer metaphor,” to the “internet metaphor.” This shift is intended to spur new thinking with regard to neural coding, and its main contribution is to privilege communication over computation as the prime goal of neural systems.<br />
<br />
'''16 December 2009'''<br />
* Speaker: Pietro Berkes<br />
* Affiliation: Volen Center for Complex Systems, Brandeis University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Generative models of vision: from sparse coding toward structured models<br />
* Abstract: From a computational perspective, one can think of visual perception as the problem of analyzing the light patterns detected by the retina to recover their external causes. This process requires combining the incoming sensory evidence with internal prior knowledge about general properties of visual elements and the way they interact, and can be formalized in a class of models known as causal generative models. In the first part of the talk, I will discuss the first and most established generative model, namely the sparse coding model. Sparse coding has been largely successful in showing how the main characteristics of simple cells receptive fields can be accounted for based solely on the statistics of natural images. I will briefly review the evidence supporting this model, and contrast it with recent data from the primary visual cortex of ferrets and rats showing that the sparseness of neural activity over development and anesthesia seems to follow trends opposite to those predicted by sparse coding. In the second part, I will argue that the generative point of view calls for models of natural images that take into account more of the structure of the visual environment. I will present a model that takes a first step in this direction by incorporating the fundamental distinction between identity and attributes of visual elements. After learning, the model mirrors several aspects of the organization of V1, and results in a novel interpretation of complex and simple cells as parallel populations of cells, coding for different aspects of the visual input. Further steps toward more structured generative models might thus lead to the development of a more comprehensive account of visual processing in the visual cortex.<br />
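For readers unfamiliar with the sparse coding model discussed in the first part, a minimal toy version can be sketched as alternating sparse inference and dictionary learning. This runs on synthetic data generated from a known dictionary rather than natural images, and every size and constant below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "patches": sparse mixtures of a known 32-element dictionary.
D_true = rng.normal(0, 1, (16, 32))
D_true /= np.linalg.norm(D_true, axis=0)
codes = rng.laplace(0, 1, (32, 500)) * (rng.random((32, 500)) < 0.1)
X = D_true @ codes

D = rng.normal(0, 1, (16, 32))                 # random initial basis
D /= np.linalg.norm(D, axis=0)
lam, lr = 0.1, 0.05

def infer(D, x, n_steps=200, step=0.02):
    """ISTA: sparse coefficients minimizing 0.5*||x - D a||^2 + lam*||a||_1."""
    a = np.zeros((D.shape[1], x.shape[1]))
    for _ in range(n_steps):
        z = a - step * (D.T @ (D @ a - x))
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)
    return a

errs = []
for _ in range(30):                            # alternate inference/learning
    a = infer(D, X)
    D += lr * (X - D @ a) @ a.T / X.shape[1]   # gradient step on the basis
    D /= np.linalg.norm(D, axis=0)             # keep basis functions unit norm
    errs.append(float(np.mean((X - D @ a) ** 2)))
print(errs[0], errs[-1])
```

Reconstruction error falls as the learned basis adapts to the data, which is the core mechanism by which sparse coding recovers receptive-field-like basis functions from image statistics.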
<br />
'''6 January 2010'''<br />
* Speaker: Susanne Still<br />
* Affiliation: U of Hawaii<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''20 January 2010'''<br />
* Speaker: Tom Dean<br />
* Affiliation: Google<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Accelerating Computer Vision and Machine Learning Algorithms with Graphics Processors<br />
* Abstract: Graphics processors (GPUs) and massively-multi-core architectures are becoming more powerful, less costly and more energy efficient, and the related programming language issues are beginning to sort themselves out. That said, most researchers don’t want to be writing code that depends on any particular architecture or parallel programming model. Linear algebra, Fourier analysis and image processing have standard libraries that are being ported to exploit SIMD parallelism in GPUs. We can depend on the massively-multiple-core machines du jour to support these libraries and on the high-performance-computing (HPC) community to do the porting for us or with us. These libraries can significantly accelerate important applications in image processing, data analysis and information retrieval. We can develop APIs and the necessary run-time support so that code relying on these libraries will run on any machine in a cluster of computers but exploit GPUs whenever available. This strategy allows us to move toward hybrid computing models that enable a wider range of opportunities for parallelism without requiring the special training of programmers or the disadvantages of developing code that depends on specialized hardware or programming models. This talk summarizes the state of the art in massively-multi-core architectures, presents experimental results that demonstrate the potential for significant performance gains in the two general areas of image processing and machine learning, provides examples of the proposed programming interface, and presents more detailed experimental results on one particular problem involving video-content analysis.<br />
<br />
'''27 January 2010'''<br />
* Speaker: David Philiponna<br />
* Affiliation: Paris<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''24 February 2010'''<br />
* Speaker: Gordon Pipa<br />
* Affiliation: U Osnabrueck/MPI Frankfurt<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''3 March 2010'''<br />
* Speaker: Gaute Einevoll<br />
* Affiliation: UMB, Norway<br />
* Host: Amir<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
<br />
'''4 March 2010'''<br />
* Speaker: Harvey Swadlow<br />
* Affiliation: <br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''8 April 2010'''<br />
* Speaker: Alan Yuille <br />
* Affiliation: UCLA<br />
* Host: Amir<br />
* Status: Confirmed (for 1pm)<br />
* Title: <br />
* Abstract:<br />
<br />
'''28 April 2010'''<br />
* Speaker: Dharmendra Modha<br />
* Affiliation: IBM<br />
* Host: Fritz<br />
* Status: Cancelled<br />
* Title: <br />
* Abstract:<br />
<br />
'''5 May 2010'''<br />
* Speaker: David Zipser<br />
* Affiliation: UCB<br />
* Host: Daniel Little<br />
* Status: Tentative<br />
* Title: Brytes 2:<br />
* Abstract:<br />
<br />
Brytes are little brains that can be assembled into larger, smarter brains. In my first talk I presented a biologically plausible, computationally tractable model of brytes and described how they can be used as subunits to build brains with interesting behaviors.<br />
<br />
In this talk I will first show how large numbers of brytes can cooperate to perform complicated actions such as arm and hand manipulations in the presence of obstacles. Then I describe a strategy for a higher level of control that informs each bryte what role it should play in accomplishing the current task. These results could have considerable significance for understanding the brain and possibly be applicable to robotics and BMI.<br />
<br />
'''12 May 2010'''<br />
* Speaker: Frank Werblin (Redwood group meeting - internal only)<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: Tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''19 May 2010'''<br />
* Speaker: Anna Judith<br />
* Affiliation: UCB<br />
* Host: Daniel Little (Redwood Lab Meeting - internal only)<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:</div>
Gisely
https://rctn.org/w/index.php?title=Seminars&diff=9024
Seminars
2017-10-03T00:08:39Z
<p>Gisely: /* Tentative / Confirmed Speakers */</p>
<hr />
<div>== Instructions ==<br />
<br />
# Check the internal calendar (here) for a free seminar slot. Seminars are usually Wednesdays at noon, but this is flexible if another day works better for the speaker. However, it is usually best to avoid booking multiple speakers in the same week - it leads to "seminar burnout" and reduced attendance. But use your own judgement here - if it's a good opportunity and that's the only time that works, then go ahead with it.<br />
# Once you have proposed a date to a speaker, fill in the speaker information under the appropriate date (or change if necessary). Use the status field to indicate whether the date is tentative or confirmed. Please also include your name as ''host'' in case somebody wants to contact you.<br />
# Once the invitation is confirmed with the speaker, change the status field to 'confirmed'. Also notify the webmaster (Bruno) [mailto:baolshausen@berkeley.edu] that we have a confirmed speaker so that he/she can update the public web page. Please include a title and abstract.<br />
# Natalie (HWNI) checks our web page regularly and will send out an announcement a week before and also include with the weekly neuro announcements, but if you don't get it confirmed until the last minute then make sure to email Natalie [mailto:nrterranova@berkeley.edu] as well to give her a heads up so she knows to send out an announcement in time.<br />
# If the speaker needs accommodations, you should contact Natalie [mailto:nrterranova@berkeley.edu] to reserve a room at the faculty club. Tell her it's for a Redwood speaker so she knows how to bill it.<br />
# During the visit you will need to look after the visitor, schedule visits with other labs, make plans for lunch, dinner, etc., and introduce them at the seminar (don't ask Bruno to do this at the last moment). Save receipts for any meals you paid for.<br />
# After the seminar and before the speaker leaves, make sure to give them Natalie's contact info and have them email her their receipts, explaining it's for reimbursement for a Redwood seminar. Natalie will then process the reimbursement. She can also help you with getting reimbursed for any expenses you incurred for meals and entertainment.<br />
<br />
== Tentative / Confirmed Speakers ==<br />
<br />
<br />
'''Oct. 11, 2017'''<br />
* Speaker: Deepak Pathak and Pulkit Agrawal<br />
* Time: 12:30 PM<br />
* Affiliation: UC Berkeley, BAIR<br />
* Host: Mayur Mudigonda<br />
* Status: Confirmed<br />
* Title: Curiosity and Rewards<br />
* Abstract:<br />
<br />
'''Nov. 8, 2017'''<br />
* Speaker: John Harte<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Maximum Entropy and the Inference of Patterns in Nature <br />
* Abstract:<br />
<br />
'''TBD, sometime in the Fall'''<br />
* Speaker: Evangelos Theodorou<br />
* Time: TBD<br />
* Affiliation: GeorgiaTech<br />
* Host: Mike/Dibyendu Mandal<br />
* Status: planning<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''September 25th 2017'''<br />
* Speaker: Caleb Kemere<br />
* Time: 12:00<br />
* Affiliation: Rice<br />
* Host: Guy Isely<br />
* Status: Confirmed<br />
* Title: Unsupervised Inference of the Hippocampal Population Code from Offline Activity<br />
* Abstract: TBD (HMM-based hippocampal replay)<br />
<br />
<br />
'''TBD, 2016'''<br />
* Speaker: Alexander Stubbs<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Michael Levy<br />
* Status: tentative<br />
* Title: Could chromatic aberration allow for an alternative evolutionary pathway towards color vision?<br />
* Abstract: We present a mechanism by which organisms with only a single photoreceptor, which have a monochromatic view of the world, can achieve color discrimination. An off-axis pupil and the principle of chromatic aberration (where different wavelengths come to focus at different distances behind a lens) can combine to provide “color-blind” animals with a way to distinguish colors. As a specific example, we constructed a computer model of the visual system of cephalopods (octopus, squid, and cuttlefish) that have a single unfiltered photoreceptor type. We compute a quantitative image quality budget for this visual system and show how chromatic blurring dominates the visual acuity in these animals in shallow water. This proposed mechanism is consistent with the extensive suite of visual/behavioral and physiological data that has been obtained from cephalopod studies and offers a possible solution to the apparent paradox of vivid chromatic behaviors in color blind animals. Moreover, this proposed mechanism has potential applicability in organisms with limited photoreceptor complements, such as spiders and dolphins.<br />
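The core optical argument can be reproduced with a thin-lens toy model: because focal length varies with wavelength, each wavelength comes to focus at a different sensor distance, so sweeping accommodation reads out color. The dispersion coefficient and dimensions below are invented for illustration and are not the measured cephalopod values from the study.

```python
import numpy as np

def focal_length(wavelength_nm):
    """Toy dispersion: shorter wavelengths refract more, so focus shorter."""
    return 10.0 * (1 + 0.02 * (wavelength_nm - 550) / 550)   # mm, illustrative

def blur_diameter(wavelength_nm, sensor_distance, aperture=2.0, obj=1e6):
    """Defocus blur-spot size for a distant object via the thin-lens equation."""
    f = focal_length(wavelength_nm)
    image_distance = 1 / (1 / f - 1 / obj)
    return aperture * abs(sensor_distance - image_distance) / image_distance

# Sweep accommodation (lens-to-retina distance); the best-focus distance
# differs per wavelength, so focus position discriminates color.
distances = np.linspace(9.5, 10.5, 1001)
best = {wl: distances[np.argmin([blur_diameter(wl, d) for d in distances])]
        for wl in (450, 550, 650)}
print(best)
```

Blue comes to focus nearer the lens than green, and green nearer than red, so even a single photoreceptor type could infer wavelength from which accommodation state minimizes blur.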
<br />
== Previous Seminars ==<br />
<br />
=== 2017/18 academic year ===<br />
<br />
'''July 10, 2017'''<br />
* Speaker: David Field<br />
* Time: 6:00pm<br />
* Affiliation: Cornell<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''July 18, 2017'''<br />
* Speaker: Jordi Puigbò<br />
* Time: 12:30<br />
* Affiliation: Synthetic, Perceptive, Emotive and Cognitive Systems (SPECS) lab, Dept. of Information and Telecommunication Technologies, Universitat Pompeu Fabra (Barcelona - Spain)<br />
* Host: Vasha<br />
* Status: Confirmed<br />
* Title: State Dependent Modulation of Perception Based on a Computational Model of Conditioning<br />
* Abstract: The embodied mammalian brain evolved to adapt to an only partially known and knowable world. The adaptive labeling of the world is critically dependent on the neocortex, which in turn is modulated by a range of subcortical systems such as the thalamus, ventral striatum, and the amygdala. A particular case in point is the learning paradigm of classical conditioning, where acquired representations of states of the world such as sounds and visual features are associated with predefined discrete behavioral responses such as eye blinks and freezing. Learning progresses in a very specific order, where the animal first identifies the features of the task that are predictive of a motivational state and then forms the association of the current sensory state with a particular action and shapes this action to the specific contingency. This adaptive feature selection has both attentional and memory components, i.e. a behaviorally relevant state must be detected while its representation must be stabilized to allow its interfacing to output systems. Here we present a computational model of the neocortical systems that underlie this feature detection process and its state-dependent modulation mediated by the amygdala and its downstream target, the nucleus basalis of Meynert. Specifically, we analyze how amygdala-driven cholinergic modulation switches between two perceptual modes, one for exploitation of learned representations and prototypes and another one for the exploration of new representations that provoked this change in the motivational state, presenting a framework for rapid learning of behaviorally relevant perceptual representations. Beyond reward-driven learning that is mostly based on exploitation, this paper presents a complementary mechanism for quick exploratory perception and learning grounded in the understanding of fear and surprise.<br />
<br />
'''Aug. 14, 2017'''<br />
* Speaker: Brent Doiron<br />
* Time: 12:00<br />
* Affiliation:<br />
* Host: Bruno/Hillel<br />
* Status: tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''Aug. 15, 2017'''<br />
* Speaker: Ken Miller<br />
* Time: 12:00<br />
* Affiliation: Columbia<br />
* Host: Bruno/Hillel<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Aug. 16, 2017'''<br />
* Speaker: Joshua Vogelstein<br />
* Time: 12:00<br />
* Affiliation: JHU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept. 6, 2017'''<br />
* Speaker: Gerald Friedland<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Jerry<br />
* Status: confirmed<br />
* Title: A Capacity Scaling Law for Artificial Neural Networks<br />
* Abstract:<br />
<br />
'''Sept. 20, 2017'''<br />
* Speaker: Carl Pabo<br />
* Time: 12:00<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Human Thought and the Human Future<br />
* Abstract:<br />
<br />
=== 2016/17 academic year ===<br />
<br />
'''Sept. 7, 2016'''<br />
* Speaker: Dan Stowell<br />
* Time: 12:00<br />
* Affiliation: Queen Mary, University of London<br />
* Host: Frederic Theunissen<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept. 8, 2016'''<br />
* Speaker: Barb Finlay<br />
* Time: 12:00<br />
* Affiliation: Cornell Univ<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept. 27, 2016'''<br />
* Speaker: Yoshua Bengio<br />
* Time: 11:00<br />
* Affiliation: Univ Montreal<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Oct. 12, 2016'''<br />
* Speaker: Paul Rhodes<br />
* Time: 4:00<br />
* Affiliation: Specific Technologies<br />
* Host: Dylan/Bruno<br />
* Status: confirmed<br />
* Title: A novel and important problem in spatiotemporal pattern classification<br />
* Abstract: Specific Technologies uses a sensor response that consists of a vector time series, a spatiotemporal fingerprint, to classify bacteria at the strain level during their growth. The identification of resistant strains of bacteria has become one of the world's great problems (here is a link to a $20M prize that the US govt has issued: https://www.nih.gov/news-events/news-releases/federal-prize-competition-seeks-innovative-ideas-combat-antimicrobial-resistance). We are using deep convolutional nets to do this classification, but they are instantaneous, and so do not capture the temporal patterns that are often at the core of what differentiates strains. So using the full temporal character of the sensor response time series is a cutting edge neural ML problem, and important to society too.<br />
<br />
'''Oct. 25, 2016'''<br />
* Speaker: Douglas L. Jones<br />
* Time: 2:00<br />
* Affiliation: ECE Department, University of Illinois at Urbana-Champaign<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Optimal energy-efficient coding in sensory neurons<br />
* Abstract: Evolutionary pressure suggests that the spike-based code in the sensory nervous system should satisfy two opposing constraints: 1) minimize signal distortion in the encoding process (i.e., maintain fidelity) by keeping the average spike rate as high as possible, and 2) minimize the metabolic load on the neuron by keeping the average spike rate as low as possible. We hypothesize that selective pressure has shaped the biophysics of a neuron to satisfy these conflicting demands. An energy-fidelity trade-off can be obtained through a constrained optimization process that achieves the lowest signal distortion for a given constraint on the spike rate. We derive the asymptotically optimal average-energy-constrained neuronal source code and show that it leads to a dynamic threshold that functions as an internal decoder (reconstruction filter) and adapts a spike-firing threshold so that spikes are emitted only when the coding error reaches this threshold. A stochastic extension is obtained by adding internal noise (dithering, or stochastic resonance) to the spiking threshold. We show that the source-coding neuron model i) reproduces experimentally observed spike-times in response to a stimulus, and ii) reproduces the serial correlations in the observed sequence of inter-spike intervals, using data from a peripheral sensory neuron and a central (cortical) somatosensory neuron. Finally, we show that the spike-timing code, although a temporal code, is in the limit of high firing rates an instantaneous rate code and accurately predicts the peri-stimulus time histogram (PSTH). We conclude by suggesting possible biophysical (ionic) mechanisms for this coding scheme.<br />
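The threshold-based source-coding scheme can be sketched as a send-on-delta encoder with an internal reconstruction filter: a signed spike is emitted only when the coding error reaches the threshold, and lowering the threshold trades higher spike rate (energy) for lower distortion. This is an illustrative simplification of the model in the talk, with an arbitrary stimulus and arbitrary thresholds.

```python
import numpy as np

t = np.linspace(0, 1, 2000)
stimulus = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

def encode(x, threshold):
    """Emit a signed spike whenever the internal decoder's estimate drifts
    more than `threshold` from the input (spike = coding error hit bound)."""
    estimate, spikes = 0.0, []
    for i, xi in enumerate(x):
        if abs(xi - estimate) >= threshold:
            s = 1.0 if xi > estimate else -1.0
            spikes.append((i, s))
            estimate += s * threshold      # internal reconstruction filter
    return spikes

def decode(spikes, n, threshold):
    """Rebuild the signal from spike times and signs alone."""
    x_hat, level, k = np.zeros(n), 0.0, 0
    for i in range(n):
        while k < len(spikes) and spikes[k][0] == i:
            level += spikes[k][1] * threshold
            k += 1
        x_hat[i] = level
    return x_hat

results = {}
for threshold in (0.4, 0.1):               # low vs high energy budget
    spikes = encode(stimulus, threshold)
    x_hat = decode(spikes, len(stimulus), threshold)
    rmse = float(np.sqrt(np.mean((stimulus - x_hat) ** 2)))
    results[threshold] = (len(spikes), rmse)
print(results)
```

The lower threshold produces more spikes and a smaller reconstruction error, tracing out the energy-fidelity trade-off the abstract describes.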
<br />
'''October 26, 2016'''<br />
* Speaker: Eric Jonas<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Charles Frye<br />
* Status: confirmed<br />
* Title: Could a neuroscientist understand a microprocessor?<br />
* Abstract: There is a popular belief in neuroscience that we are primarily data limited, that producing large, multimodal, and complex datasets will, enabled by data analysis algorithms, lead to fundamental insights into the way the brain processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. Here we take a simulated classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the processor. This suggests that current computational approaches in neuroscience may fall short of producing meaningful models of the brain. We discuss several obvious shortcomings with this model, and ways that they might be addressed, both experimentally and computationally. <br />
* Bio: Eric Jonas is currently a postdoc in computer science at UC Berkeley working with Ben Recht on machine learning for scientific data acquisition. He earned his PhD in Computational Neuroscience, M. Eng in Electrical Engineering, BS in Electrical Engineering and Computer Science, and BS in Neurobiology, all from MIT. Prior to his return to academia, he was founder and CEO of Prior Knowledge, a predictive database company which was acquired in 2012 by Salesforce.com, where he was Chief Predictive Scientist until 2014. In 2015 he was named one of the top rising stars in bioengineering by the Defense Department’s Advanced Research Projects Agency (DARPA).<br />
<br />
'''Nov. 9, 2016'''<br />
* Speaker: Pulkit Agrawal<br />
* Time: 12:00<br />
* Affiliation: EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''Nov. 16, 2016'''<br />
* Speaker: Sebastian Musslick<br />
* Time: 12:00<br />
* Affiliation: Princeton Neuroscience Institute (Princeton University)<br />
* Host: Brian Cheung<br />
* Status: confirmed<br />
* Title: Parallel Processing Capability Versus Efficiency of Representation in Neural Network Architectures<br />
* Abstract: One of the most salient and well-recognized features of human goal-directed behavior is our limited ability to conduct multiple demanding tasks at once. Why is this? Some have suggested it reflects metabolic limitations, or structural ones. However, both explanations are unlikely. The brain routinely demonstrates the ability to carry out a multitude of processes in an enduring and parallel manner (walking, breathing, listening). Why, in contrast, is its capacity for allocating attention to control-demanding tasks - such a critical and powerful function - so limited? In the first part of my talk I will describe a computational framework that explains limitations of parallel processing in neural network architectures as the result of cross-talk between shared task representations. Using graph-theoretic analyses we show that the parallel processing (multitasking) capability of two-layer networks drops precipitously as a function of task pathway overlap, and scales highly sublinearly with network size. I will describe how this analysis can be applied to task representations encoded in neural networks or neuroimaging data, and show how it can be used to predict both concurrent and sequential multitasking performance in trained neural networks based on single task representations. Our results suggest that maximal parallel processing performance is achieved by segregating task pathways, by separating the representations on which they rely. However, there is a countervailing pressure for pathways to intersect: the re-use of representations to facilitate learning of new tasks. In the second part of my talk I will demonstrate a tradeoff between learning efficiency and parallel processing capability in neural networks. It can be shown that weight priors on learned task similarity improve learning speed and generalization but lead to strong constraints on parallel processing capability. 
These findings will be contrasted with an ongoing behavioral study that assesses learning and multitasking performance of human subjects across tasks with varying degrees of feature overlap.<br />
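The graph-theoretic capacity argument can be made concrete with a toy sketch (not the speaker's code; the task and pathway structure below are invented for illustration): model each task as an (input pathway, output pathway) pair in a two-layer network and count the largest set of tasks that share neither an input nor an output.<br />

```python
from itertools import combinations

def max_parallel_set(tasks):
    """Largest subset of tasks that can run concurrently, i.e. that
    share no input pathway and no output pathway (brute force)."""
    for r in range(len(tasks), 0, -1):
        for subset in combinations(tasks, r):
            ins = [t[0] for t in subset]
            outs = [t[1] for t in subset]
            if len(set(ins)) == r and len(set(outs)) == r:
                return r
    return 0

# Fully segregated pathways: every task can run at once.
print(max_parallel_set([(0, 0), (1, 1), (2, 2)]))  # → 3
# One shared input and one shared output: capacity drops.
print(max_parallel_set([(0, 0), (0, 1), (1, 1)]))  # → 2
```

Even this tiny example shows the qualitative effect described in the abstract: overlap between task pathways, not network size, is what limits how many tasks can run in parallel.<br />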
<br />
'''Nov 30, 2016'''<br />
* Speaker: Marcus Rohrbach<br />
* Time: 12:00<br />
* Affiliation: EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''March 1st, 2017'''<br />
* Speaker: Sahar Akram<br />
* Time: 12:00<br />
* Affiliation: Starkey Hearing Research Center <br />
* Host: Shariq<br />
* Status: Confirmed<br />
* Title: Real-Time & Adaptive Auditory Neural Processing<br />
* Abstract: Decoding the dynamics of brain activity underlying conscious behavior is one of the key questions in systems neuroscience. Sensory neurons, such as those in the auditory system, can undergo rapid and task-dependent changes in their response characteristics during attentive behavior, thereby producing functional changes in the system over time. In order to quantify humans’ conscious experience, neuroimaging techniques such as electroencephalography (EEG) and magnetoencephalography (MEG) are widely used to record neural activity from the brain with millisecond temporal resolution. Therefore, a dynamic decoding framework on par with the sampling resolution of EEG/MEG is crucial in order to better understand the neural correlates underlying sophisticated cognitive functions such as attention. I will talk about two recent attempts at real-time decoding of brain activity during a competing auditory attention task, using Bayesian hierarchical modeling and adaptive signal processing.<br />
<br />
'''Mar 2, 2017'''<br />
* Speaker: Joszef Fiser<br />
* Time: 12:00<br />
* Affiliation:<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Mar 22, 2017'''<br />
* Speaker: Michael Frank<br />
* Time: 12:00<br />
* Affiliation: Magicore Systems<br />
* Host: Dylan<br />
* Status: Confirmed<br />
* Title: The Future of the Multi-core Platform: Task-Superscalar Extensions to the Von Neumann Architecture and Optimization for Neural Networks<br />
* Abstract: Technology scaling carried computer science through the second half of the 20th century until single-CPU performance started leveling off, after which multi- and many-core processors, including GPUs, emerged as the substrate for high-performance computing. Mobile market implementations followed this trend, and today you might be carrying a phone with more than 16 different processors. For power-efficiency reasons, many of the cores are specialized to perform limited functions (such as modem or connectivity control, graphics rendering, or future neural-network acceleration), with most mainstream phones containing four or more general-purpose processors. As Steve Jobs insightfully commented almost a decade ago, “The way the processor industry is going is to add more and more cores, but nobody knows how to program those things.” Jobs was correct: programming these multiprocessor systems has become a challenge, and several programming models have been proposed in academia to address this issue. Power and thermals are also an ever-present thorn for mass-market applications. Through the years, CPUs based on the von Neumann architecture have fended off attacks from many directions; today, complex superscalar implementations execute multiple instructions each clock cycle, in parallel and out of order, keeping up the illusion of sequential processing. Recent research demonstrates, though, that augmenting the von Neumann architecture with a few established concepts from data-flow and task-parallel programming creates a credible and intuitive parallel architecture, enabling notable compute-efficiency improvements while retaining compatibility with the current mainstream. This talk will review the current state of the processor industry; after highlighting why we are running out of steam with ILP, I will outline the task-superscalar programming model as the “ring to rule them all” and provide insights as to how this architecture can take advantage of special HW acceleration for data-flow management and provide support for efficient neuromorphic computing.<br />
<br />
'''April 12, 2017'''<br />
* Speaker: Aapo Hyvarinen<br />
* Time: 12:00<br />
* Affiliation: Gatsby/UCL<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''May 24, 2017'''<br />
* Speaker: Pierre Sermanet<br />
* Time: 12:00<br />
* Affiliation: Google Brain<br />
* Host: Brian<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''May 30, 2017'''<br />
* Speaker: Heiko Schutt<br />
* Time: 12:00<br />
* Affiliation: Univ Tubingen<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''June 7, 2017'''<br />
* Speaker: Saurabh Gupta<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Spencer<br />
* Status: confirmed<br />
* Title: Cognitive Mapping and Planning for Visual Navigation<br />
* Abstract: We introduce a novel neural architecture for navigation in novel environments that learns a cognitive map from first person viewpoints and plans a sequence of actions towards goals in the environment. The Cognitive Mapper and Planner (CMP) is based on two key ideas: a) a unified joint architecture for mapping and planning, such that the mapping is driven by the needs of the planner, and b) a spatial memory with the ability to plan given an incomplete set of observations about the world. CMP constructs a top-down belief map of the world and applies a differentiable neural net planner to produce the next action at each time step. The accumulated belief of the world enables the agent to track visited regions of the environment. Our experiments demonstrate that CMP outperforms both reactive strategies and standard memory-based architectures and performs well even in novel environments. Furthermore, we show that CMP can also achieve semantically specified goals, such as “go to a chair”. This is joint work with James Davidson, Sergey Levine, Rahul Sukthankar and Jitendra Malik.<br />
<br />
'''June 14, 2017'''<br />
* Speaker: Madhow<br />
* Time: 12:00<br />
* Affiliation: UCSB<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''June 19, 2017'''<br />
* Speaker: Tali Tishby<br />
* Time: 12:00<br />
* Affiliation: Hebrew Univ.<br />
* Host: Bruno/Daniel Reichman<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''June 21, 2017'''<br />
* Speaker: Jasmine Collins<br />
* Time: 12:00<br />
* Affiliation: Google<br />
* Host: Brian<br />
* Status: confirmed<br />
* Title: Capacity and Trainability in Recurrent Neural Networks <br />
* Abstract: Two potential bottlenecks on the expressiveness of recurrent neural networks (RNNs) are their ability to store information about the task in their parameters, and to store information about the input history in their units. We show experimentally that all common RNN architectures achieve nearly the same per-task and per-unit capacity bounds with careful training, for a variety of tasks and stacking depths. They can store an amount of task information which is linear in the number of parameters, and is approximately 5 bits per parameter. They can additionally store approximately one real number from their input history per hidden unit. We further find that for several tasks it is the per-task parameter capacity bound that determines performance. These results suggest that many previous results comparing RNN architectures are driven primarily by differences in training effectiveness, rather than differences in capacity. Supporting this observation, we compare training difficulty for several architectures, and show that vanilla RNNs are far more difficult to train, yet have slightly higher capacity. Finally, we propose two novel RNN architectures, one of which is easier to train than the LSTM or GRU for deeply stacked architectures.<br />
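As a back-of-envelope reading of the bounds stated above (the ~5 bits per parameter and one-real-per-unit figures come from the abstract; the helper function and example sizes are illustrative, not from the paper):<br />

```python
def rnn_capacity_estimate(n_params, n_units, bits_per_param=5.0):
    """Rough capacity bounds implied by the abstract (illustrative only):
    task information scales linearly in the number of parameters
    (~5 bits each), and roughly one real number of input history is
    stored per hidden unit."""
    return {"task_bits": bits_per_param * n_params,
            "history_reals": float(n_units)}

est = rnn_capacity_estimate(n_params=10_000, n_units=100)
print(est)  # {'task_bits': 50000.0, 'history_reals': 100.0}
```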
<br />
=== 2015/16 academic year ===<br />
<br />
'''July 21, 2015'''<br />
* Speaker: Felix Effenberger<br />
* Affiliation: <br />
* Host: Chris H.<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''July 22, 2015'''<br />
* Speaker: Lav Varshney<br />
* Affiliation: Urbana-Champaign<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''July 23, 2015'''<br />
* Speaker: Xuemin Wei<br />
* Affiliation: Univ Penn<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''July 29, 2015'''<br />
* Speaker: Gonzalo Otazu<br />
* Affiliation: Cold Spring Harbor Laboratory, Long Island, NY<br />
* Host: Mike D<br />
* Status: Confirmed<br />
* Title: The Role of Cortical Feedback in Olfactory Processing<br />
* Abstract: The olfactory bulb receives rich glutamatergic projections from the piriform cortex. However, the dynamics and importance of these feedback signals remain unknown. In the first part of this talk, I will present data from multiphoton calcium imaging of cortical feedback in the olfactory bulb of awake mice. Responses of feedback boutons were sparse, odor specific, and often outlasted stimuli by several seconds. Odor presentation either enhanced or suppressed the activity of boutons. However, any given bouton responded with stereotypic polarity across multiple odors, preferring either enhancement or suppression. Inactivation of piriform cortex increased odor responsiveness and pairwise similarity of mitral cells but had little impact on tufted cells. We propose that cortical feedback differentially impacts these two output channels of the bulb by specifically decorrelating mitral cell responses to enable odor separation. In the second part of the talk I will introduce a computational model of odor identification in natural scenes that uses cortical feedback and how the model predictions match our experimental data.<br />
<br />
'''Aug 19, 2015'''<br />
* Speaker: Wujie Zhang<br />
* Affiliation: Columbia<br />
* Host: Bruno/Michael Yartsev<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept 2, 2015'''<br />
* Speaker: Jeremy Maitin-Shepard<br />
* Affiliation: Computer Science, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Combinatorial Energy Learning for Image Segmentation<br />
* Abstract: Recent advances in volume electron microscopy make it possible to image neuronal tissue volumes containing hundreds of thousands of neurons at sufficient resolution to discern even the finest neuronal processes. Accurate 3-D segmentation of the neuronal processes densely packed in these petavoxel-scale volumes is the key bottleneck in reconstructing large-scale neural circuits.<br />
<br />
'''Sept 8, 2015'''<br />
* Speaker: Jennifer Hasler<br />
* Affiliation: Georgia Tech<br />
* Host: Bruno/Mika<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''October 29, 2015'''<br />
* Speaker: Garrett Kenyon<br />
* Affiliation: Los Alamos National Laboratory<br />
* Host: Dylan<br />
* Status: confirmed<br />
* Title: A Deconvolutional Competitive Algorithm (DCA)<br />
* Abstract: The Locally Competitive Algorithm (LCA) is a neurally-plausible sparse solver based on lateral inhibition between leaky integrator neurons. LCA accounts for many linear and nonlinear response properties of V1 simple cells, including end-stopping and contrast-invariant orientation tuning. Here, we describe a convolutional implementation of LCA in which a column of feature vectors is replicated with a stride that is much smaller than the diameter of the corresponding kernels, allowing the construction of dictionaries that are many times more overcomplete than without replication. Using a local Hebbian rule that minimizes sparse reconstruction error, we are able to learn representations from unlabeled imagery, including monocular and stereo video streams, that in some cases support near state-of-the-art performance on object detection, action classification and depth estimation tasks, with a simple linear classifier. We further describe a scalable approach to building a hierarchy of convolutional LCA layers, which we call a Deconvolutional Competitive Algorithm (DCA). All layers in a DCA are trained simultaneously and all layers contribute to a single image reconstruction, with each layer deconvolving its representation through all lower layers back to the image plane. We show that a 3-layer DCA trained on short video clips obtained from hand-held cameras exhibits a clear segregation of image content, with features in the top layer reconstructing large-scale structures while features in the middle and bottom layers reconstruct progressively finer details. Lastly, we describe PetaVision, an open source, cloud-friendly, high-performance neural simulation toolbox that was used to perform the numerical studies presented here.<br />
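The LCA dynamics described above can be written down in a few lines (a minimal non-convolutional sketch, assuming the standard soft-threshold form of LCA; the parameter values and toy dictionary are illustrative, not from the talk or from PetaVision):<br />

```python
import numpy as np

def lca(x, Phi, lam=0.1, step=0.1, n_iter=300):
    """Locally Competitive Algorithm (sketch): find a sparse code a with
    x ≈ Phi @ a via leaky-integrator neurons with lateral inhibition."""
    n = Phi.shape[1]
    u = np.zeros(n)                   # membrane potentials
    G = Phi.T @ Phi - np.eye(n)       # lateral inhibition (competition)
    b = Phi.T @ x                     # feedforward drive
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    for _ in range(n_iter):
        a = soft(u)                   # thresholded activations
        u += step * (b - u - G @ a)   # du/dt = (1/tau)(b - u - G a)
    return soft(u)

rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 30))
Phi /= np.linalg.norm(Phi, axis=0)    # unit-norm dictionary elements
x = Phi[:, 0]                         # signal = one dictionary element
a = lca(x, Phi)
```

With a unit-norm dictionary, the code concentrates on the generating element and the reconstruction residual shrinks to roughly the threshold lam; the competition term G suppresses coherent but redundant units, which is the lateral-inhibition mechanism the abstract refers to.<br />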
<br />
'''Nov 18, 2015'''<br />
* Speaker: Hillel Adesnik<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Nov 17, 2015'''<br />
* Speaker: Manuel Lopez<br />
* Affiliation: <br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Dec 2, 2015'''<br />
* Speaker: Steven Brumby<br />
* Affiliation: [http://www.descarteslabs.com/ Descartes Labs]<br />
* Host: Dylan<br />
* Status: confirmed<br />
* Title: Seeing the Earth in the Cloud<br />
* Abstract: The proliferation of transistors has increased the performance of computing systems by over a factor of a million in the past 30 years, and is also dramatically increasing the amount of data in existence, driving improvements in sensor, communication and storage technology. Multi-decadal Earth and planetary remote sensing global datasets at the petabyte scale (8×10^15 bits) are now available in commercial clouds, and new satellite constellations are planning to generate petabytes of images per year, providing daily global coverage at a few meters per pixel. Cloud storage with adjacent high-bandwidth compute, combined with recent advances in neuroscience-inspired machine learning for computer vision, is enabling understanding of the world at a scale and at a level of granularity never before feasible. We report here on a computation processing over a petabyte of compressed raw data from 2.8 quadrillion pixels (2.8 petapixels) acquired by the US Landsat and MODIS programs over the past 40 years. Using commodity cloud computing resources, we convert the imagery to a calibrated, georeferenced, multiresolution tiled format suited for machine-learning analysis. We believe ours is the first application to process, in less than a day, on generally available resources, over a petabyte of scientific image data. We report on work using this reprocessed dataset for experiments demonstrating country-scale food production monitoring, an indicator for famine early warning. <br />
<br />
'''Dec 14, 2015'''<br />
* Speaker: Bill Softky <br />
* Affiliation:<br />
* Host: Bruno<br />
* Status: confirmed <br />
* Title: Screen addiction - informal Redwood group seminar<br />
<br />
'''Dec 16, 2015'''<br />
* Speaker: Mike Landy<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Feb 3, 2016'''<br />
* Speaker: Ping-Chen Huang<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Feb 17, 2016'''<br />
* Speaker: Andrew Saxe<br />
* Affiliation: Harvard<br />
* Host: Jesse<br />
* Status: confirmed<br />
* Title: Hallmarks of Deep Learning in the Brain<br />
<br />
'''Feb 24, 2016'''<br />
* Speaker: Miguel Carreira-Perpinan<br />
* Affiliation: UC Merced<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
<br />
'''Mar 1, 2016'''<br />
* Speaker: Leon Gatys<br />
* Affiliation: Univ Tubingen<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Mar 7-9, 2016'''<br />
* NICE workshop<br />
<br />
'''Mar 9, 2016'''<br />
* Tatiana Engel - HWNI job talk at 12:00<br />
<br />
'''Mar 16, 2016'''<br />
* Talia Lerner - HWNI job talk at 12:00<br />
<br />
'''Mar 23, 2016'''<br />
* Speaker: Kwabena Boahen<br />
* Affiliation: Stanford<br />
* Host: Max Kanwal/Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''April 11, 2016'''<br />
* Speaker: Hao Su<br />
* Time: at 12:00<br />
* Affiliation: Geometric Computing Lab and Artificial Intelligence Lab, Stanford University<br />
* Host: Yubei<br />
* Status: confirmed<br />
* Title: [Tentative] Joint Analysis of 2D Images and 3D Shapes<br />
* Abstract: Coming<br />
<br />
'''May 04, 2016'''<br />
* Speaker: Zhengya Zhang<br />
* Time: 12:00<br />
* Affiliation: Electrical Engineering and Computer Science, University of Michigan<br />
* Host: Dylan, Bruno<br />
* Status: Confirmed<br />
* Title: Sparse Coding ASIC Chips for Feature Extraction and Classification<br />
* Abstract: Hardware-based computer vision accelerators will be an essential part of future mobile and autonomous devices to meet their low-power and real-time processing requirements. To realize high energy efficiency and throughput, the accelerator architecture can be massively parallelized and tailored to the underlying algorithms, which is an advantage over software-based solutions and general-purpose hardware. In this talk, I will present three application-specific integrated circuit (ASIC) chips that implement the sparse and independent local network (SAILnet) algorithm and the locally competitive algorithm (LCA) for feature extraction and classification. Two of the chips were designed using an array of leaky integrate-and-fire neurons. Sparse activations of the neurons make possible an efficient grid-ring architecture that delivers an image-processing throughput of 1 Gpixel/s using only 200 mW. The third chip was designed using a convolution approach. Sparsity is again an important factor that enabled the use of sparse convolvers to achieve an effective performance of 900 G operations/s using less than 150 mW.<br />
<br />
'''May 18, 2016'''<br />
* Speaker: Melanie Mitchell<br />
* Affiliation: Portland State University and Santa Fe Institute<br />
* Host: Dylan<br />
* Time: 12:00<br />
* Status: confirmed<br />
* Title: Using Analogy to Recognize Visual Situations<br />
* Abstract: Enabling computers to recognize abstract visual situations remains a hard open problem in artificial intelligence. No machine vision system comes close to matching human ability at identifying the contents of images or visual scenes, or at recognizing abstract similarity between different scenes, even though such abilities pervade human cognition. In this talk I will describe my research on getting computers to flexibly recognize visual situations by integrating low-level vision algorithms with an agent-based model of higher-level concepts and analogy-making.<br />
* Bio: Melanie Mitchell is Professor of Computer Science at Portland State University, and External Professor and Member of the Science Board at the Santa Fe Institute. She received a Ph.D. in Computer Science from the University of Michigan. Her dissertation, in collaboration with her advisor Douglas Hofstadter, was the development of Copycat, a computer program that makes analogies. She is the author or editor of five books and over 70 scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her most recent book, Complexity: A Guided Tour (Oxford, 2009), won the 2010 Phi Beta Kappa Science Book Award. It was also named by Amazon.com as one of the ten best science books of 2009, and was longlisted for the Royal Society's 2010 book prize. Melanie directs the Santa Fe Institute's Complexity Explorer project, which offers online courses and other educational resources related to the field of complex systems.<br />
<br />
'''June 8, 2016'''<br />
* Speaker: Kris Bouchard<br />
* Time: 12:00<br />
* Affiliation: LBNL<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: The union of intersections method<br />
* Abstract:<br />
<br />
'''June 15, 2016'''<br />
* Speaker: James Blackmon<br />
* Time: 12:00<br />
* Affiliation: San Francisco State University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
=== 2014/15 academic year ===<br />
<br />
'''2 July 2014'''<br />
* Speaker: Kelly Clancy<br />
* Affiliation: Feldman lab<br />
* Host: Guy<br />
* Status: confirmed<br />
* Title: Volitional control of neural assemblies in L2/3 of motor and somatosensory cortices<br />
* Abstract: I'll be talking about a joint effort between the Feldman, Carmena and Costa labs to study abstract task learning by small neuronal assemblies in intact networks. Brain-machine interfaces are a unique tool for studying learning, thanks to the direct mapping between neural activity and reward. We trained mice to operantly control an auditory cursor using spike-related calcium signals recorded with two-photon imaging in motor and somatosensory cortex, allowing us to assess the effects of learning with great spatial detail. Mice rapidly learned to modulate activity in layer 2/3 neurons, evident both across and within sessions. Interestingly, even neurons that exhibited very low or no spontaneous spiking--so-called 'silent' cells that are invisible to electrode-based techniques--could be behaviorally up-modulated for task performance. Learning was accompanied by modifications of firing correlations in spatially localized networks at fine scales.<br />
<br />
'''23 July 2014'''<br />
* Speaker: Gautam Agarwal<br />
* Affiliation: UC Berkeley/Champalimaud<br />
* Host: Friedrich Sommer<br />
* Status: confirmed<br />
* Title: Unsolved Mysteries of Hippocampal Dynamics<br />
* Abstract: Two radically different forms of electrical activity can be observed in the rat hippocampus: spikes and local field potentials (LFPs). Hippocampal pyramidal neurons are mostly silent, yet spike vigorously as the subject encounters particular locations in its environment. In contrast, LFPs appear to lack place-selectivity, persisting regardless of the rat's location. Recently, we found that in fact one can recover from LFPs the spatial information present in the underlying neuronal population, showing how these two signals are two sides of the same coin. Nonetheless, there are many aspects of the LFP that remain mysterious. I will review several observations and explanatory gaps which await further study. These include: the relationship of LFP patterns to anatomy; the elusive structure of gamma waves; complex forms of cross-frequency coupling; variations in LFP patterns seen when the rat explores its world more freely; reconciling the memory and navigation roles of the hippocampus.<br />
<br />
'''6 Aug 2014'''<br />
* Speaker: Georg Martius<br />
* Affiliation: Max Planck Institute, Leipzig<br />
* Host: Fritz Sommer<br />
* Status: confirmed<br />
* Title: Information driven self-organization of robotic behavior<br />
* Abstract: Autonomy is a puzzling phenomenon in nature and a major challenge in the world of artifacts. A key feature of autonomy in both natural and artificial systems is seen in the ability for independent exploration. In animals and humans, the ability to modify one's own pattern of activity is not only an indispensable trait for adaptation and survival in new situations; it also provides a learning system with novel information for improving its cognitive capabilities, and it is essential for development. Efficient exploration in high-dimensional spaces is a major challenge in building learning systems. We propose to implement exploration as a deterministic law derived from maximizing an information quantity. More specifically, we use the predictive information of the sensor process (of a robot) to obtain an update rule (exploration dynamics) for the controller parameters. To be adequate for robotics applications, the non-stationary nature of the underlying time series has to be taken into account, which we do by proposing the time-local predictive information (TiPI). Importantly, the exploration dynamics is derived analytically, and by this we link information theory and dynamical systems. Without a random component, the change in the parameters is deterministically given as a function of the states in a certain time window. For an embodied system this means in particular that constraints, responses, and current knowledge of the dynamical interaction with the environment can directly be used to advance further exploration. Randomness is replaced with spontaneity, which we demonstrate to restrict the search space automatically to the physically relevant dimensions. Its effectiveness will be presented with various experiments on high-dimensional robotic systems, and we argue that this is a promising way to avoid the curse of dimensionality. This talk describes joint work with Ralf Der and Nihat Ay.<br />
<br />
'''15 Aug 2014'''<br />
* Speaker: Juergen Schmidhuber<br />
* Affiliation: IDSIA, Switzerland<br />
* Host: James/Shariq<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''2 Sept 2014'''<br />
* Speaker: Oriol Vinyals <br />
* Affiliation: Google<br />
* Host: Guy<br />
* Status: confirmed<br />
* Title: Machine Translation with Long-Short Term Memory Models<br />
* Abstract: Large supervised deep neural networks have achieved good results in speech recognition and computer vision. Although very successful, deep neural networks can only be applied to problems whose inputs and outputs can be conveniently encoded with vectors of fixed dimensionality - they cannot easily be applied to problems whose inputs and outputs are sequences. In this work, we show how to use a large deep Long Short-Term Memory (LSTM) model to solve domain-agnostic supervised sequence-to-sequence problems with minimal manual engineering. Our model uses one LSTM to map the input sequence to a vector of fixed dimensionality and another LSTM to map the vector to the output sequence. We applied our model to a machine translation task and achieved encouraging results. On the WMT'14 translation task from English to French, a model combination of 6 large LSTMs achieves a BLEU score of 32.3 (where a larger score is better). For comparison, a strong standard statistical MT baseline achieves a BLEU score of 33.3. When we use our LSTM to rescore the n-best lists produced by the SMT baseline, we achieve a BLEU score of 36.3, which is a new state of the art. This is joint work with Ilya Sutskever and Quoc Le.<br />
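The n-best rescoring step mentioned at the end can be sketched abstractly: interpolate the SMT baseline score with the LSTM's log-probability and rerank (a hypothetical sketch; the interpolation weight, score scales, and toy scorer below are invented, not the paper's actual procedure).<br />

```python
def rescore_nbest(nbest, lstm_logprob, weight=0.5):
    """Rerank an SMT n-best list by mixing each hypothesis's baseline
    model score with an LSTM log-probability. `nbest` is a list of
    (hypothesis, smt_score) pairs; `weight` is an illustrative choice."""
    scored = sorted(
        ((weight * smt_score + (1 - weight) * lstm_logprob(hyp), hyp)
         for hyp, smt_score in nbest),
        reverse=True)
    return [hyp for _, hyp in scored]

# Toy example with a stand-in LSTM scorer (log-probabilities).
lstm = {"la maison bleue": -2.0, "le maison bleu": -9.0}.__getitem__
nbest = [("le maison bleu", -1.0), ("la maison bleue", -1.5)]
print(rescore_nbest(nbest, lstm))  # ['la maison bleue', 'le maison bleu']
```

The LSTM demotes the hypothesis it finds improbable even though the baseline ranked it first, which is how rescoring lifts BLEU above the baseline alone.<br />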
<br />
'''19 Sept 2014'''<br />
* Speaker: Gary Marcus<br />
* Affiliation: NYU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''24 Sept 2014'''<br />
* Speaker: Alyosha Efros<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''30 Sep 2014'''<br />
* Speaker: Alejandro Bujan<br />
* Affiliation:<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Propagation and variability of evoked responses: the role of correlated inputs and oscillations<br />
* Abstract: <br />
<br />
'''8 Oct 2014'''<br />
* Speaker: Siyu Zhang<br />
* Affiliation: UC Berkeley<br />
* Host: Karl<br />
* Status: confirmed<br />
* Title: Long-range and local circuits for top-down modulation of visual cortical processing<br />
* Abstract:<br />
<br />
'''15 Oct 2014'''<br />
* Speaker: Tamara Broderick<br />
* Affiliation: UC Berkeley<br />
* Host: Yvonne/James<br />
* Status: confirmed<br />
* Title: Feature allocations, probability functions, and paintboxes<br />
* Abstract: Clustering involves placing entities into mutually exclusive categories. We wish to relax the requirement of mutual exclusivity, allowing objects to belong simultaneously to multiple classes, a formulation that we refer to as "feature allocation." The first step is a theoretical one. In the case of clustering the class of probability distributions over exchangeable partitions of a dataset has been characterized (via exchangeable partition probability functions and the Kingman paintbox). These characterizations support an elegant nonparametric Bayesian framework for clustering in which the number of clusters is not assumed to be known a priori. We establish an analogous characterization for feature allocation; we define notions of "exchangeable feature probability functions" and "feature paintboxes" that lead to a Bayesian framework that does not require the number of features to be fixed a priori. The second step is a computational one. Rather than appealing to Markov chain Monte Carlo for Bayesian inference, we develop a method to transform Bayesian methods for feature allocation (and other latent structure problems) into optimization problems with objective functions analogous to K-means in the clustering setting. These yield approximations to Bayesian inference that are scalable to large inference problems.<br />
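For concreteness, the best-known exchangeable feature allocation is the Indian Buffet Process; the sampler below is an illustrative sketch of that one special case (the talk's characterization is more general, and this is not the speaker's code):<br />

```python
import numpy as np

def sample_ibp(n_objects, alpha, rng):
    """Sample a feature allocation from the Indian Buffet Process.
    Returns a list of features, each a list of owning object indices;
    the number of features is not fixed a priori."""
    features = []
    for i in range(n_objects):
        # Object i takes existing feature k with probability m_k / (i + 1),
        # where m_k is how many earlier objects already have it...
        for f in features:
            if rng.random() < len(f) / (i + 1):
                f.append(i)
        # ...then introduces Poisson(alpha / (i + 1)) brand-new features.
        for _ in range(rng.poisson(alpha / (i + 1))):
            features.append([i])
    return features

feats = sample_ibp(5, alpha=2.0, rng=np.random.default_rng(0))
```

Objects can own several features at once, and the expected number of features grows with alpha, mirroring how the nonparametric clustering constructions avoid fixing the number of clusters.<br />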
<br />
'''29 Oct 2014'''<br />
* Speaker: Ken Nakayama<br />
* Affiliation: Harvard<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Topics in higher level visuo-motor control<br />
* Abstract: TBA<br />
<br />
'''5 Nov 2014''' - **BVLC retreat**<br />
<br />
'''20 Nov 2014'''<br />
* Speaker: Haruo Hosoya<br />
* Affiliation: ATR Institute, Japan<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''9 Dec 2014'''<br />
* Speaker: Dirk DeRidder<br />
* Affiliation: Dunedin School of Medicine, University of Otago, New Zealand<br />
* Host: Bruno/Walter Freeman<br />
* Status: confirmed<br />
* Title: The Bayesian brain, phantom percepts and brain implants<br />
* Abstract: TBA<br />
<br />
'''January 14, 2015'''<br />
* Speaker: Kevin O'Regan<br />
* Affiliation: CNRS - Université Paris Descartes<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''January 21, 2015'''<br />
* Speaker: Adrienne Fairhall<br />
* Affiliation: University of Washington<br />
* Host: Mike Schachter<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''January 26, 2015'''<br />
* Speaker: Abraham Peled<br />
* Affiliation: Mental Health Center, 'Technion' Israel Institute of Technology<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Clinical Brain Profiling: A Neuro-Computational Psychiatry<br />
* Abstract: TBA<br />
<br />
'''January 28, 2015'''<br />
* Speaker: Rich Ivry<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Embodied Decision Making: System interactions in sensorimotor adaptation and reinforcement learning<br />
* Abstract:<br />
<br />
'''February 11, 2015'''<br />
* Speaker: Mark Lescroart<br />
* Affiliation: UC Berkeley<br />
* Host: Karl<br />
* Status: tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''February 25, 2015'''<br />
* Speaker: Steve Chase<br />
* Affiliation: CMU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Joint Redwood/CNEP seminar<br />
* Abstract:<br />
<br />
'''March 3, 2015'''<br />
* Speaker: Andreas Herz<br />
* Affiliation: Bernstein Center, Munich<br />
* Host: Bruno/Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''March 3, 2015 - 4:00'''<br />
* Speaker: James Cooke<br />
* Affiliation: Oxford<br />
* Host: Mike Deweese<br />
* Status: confirmed<br />
* Title: Neural Circuitry Underlying Contrast Gain Control in Primary Auditory Cortex<br />
* Abstract:<br />
<br />
'''March 4, 2015'''<br />
* Speaker: Bill Sprague<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: V1 disparity tuning and the statistics of disparity in natural viewing<br />
* Abstract:<br />
<br />
'''March 11, 2015'''<br />
* Speaker: Jozsef Fiser<br />
* Affiliation: Central European University<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''April 1, 2015'''<br />
* Speaker: Saeed Saremi<br />
* Affiliation: Salk Inst<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''April 15, 2015'''<br />
* Speaker: Zahra M. Aghajan<br />
* Affiliation: UCLA<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Hippocampal Activity in Real and Virtual Environments<br />
* Abstract:<br />
<br />
'''May 7, 2015'''<br />
* Speaker: Santani Teng<br />
* Affiliation: MIT<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''May 13, 2015'''<br />
* Speaker: Harri Valpola<br />
* Affiliation: ZenRobotics<br />
* Host: Brian<br />
* Status: Tentative<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''June 24, 2015'''<br />
* Speaker: Kendrick Kay<br />
* Affiliation: Department of Psychology, Washington University in St. Louis<br />
* Host: Karl<br />
* Status: Confirmed<br />
* Title: Using functional neuroimaging to reveal the computations performed by the human visual system<br />
* Abstract:<br />
Visual perception is the result of a complex set of computational transformations performed by neurons in the visual system. Functional magnetic resonance imaging (fMRI) is ideally suited for identifying these transformations, given its excellent spatial resolution and ability to monitor activity across the numerous areas of visual cortex. In this talk, I will review past research in which we used fMRI to develop increasingly accurate models of the stimulus transformations occurring in early and intermediate visual areas. I will then describe recent research in which we successfully extend this approach to high-level visual areas involved in perception of visual categories (e.g. faces) and demonstrate how top-down attention modulates bottom-up stimulus representations. Finally, I will discuss ongoing research targeting regions of ventral temporal cortex that are essential for skilled reading. Our model-based approach, combined with high-field laminar measurements, is expected to provide an integrated picture of how bottom-up stimulus transformations and top-down cognitive factors interact to support rapid and accurate word recognition. Development of quantitative models and associated experimental paradigms may help us understand and diagnose impairments in neural processing that underlie visual disorders such as dyslexia and prosopagnosia.<br />
<br />
=== 2013/14 academic year ===<br />
<br />
'''9 Oct 2013'''<br />
* Speaker: Ekaterina Brocke<br />
* Affiliation: KTH University, Stockholm, Sweden<br />
* Host: Tony<br />
* Status: confirmed<br />
* Title: Multiscale modeling in Neuroscience: first steps towards multiscale co-simulation tool development.<br />
* Abstract: Multiscale modeling and simulation attract an increasing number of neuroscientists who study how different levels of organization (networks of neurons, cellular/subcellular levels) interact with each other across multiple scales of space and time to mediate different brain functions. Different scales are usually described by different physical and mathematical formalisms, making their integration non-trivial. In this talk, I will discuss key phenomena in neuroscience that can be addressed using subcellular/cellular models and possible approaches to performing multiscale simulations, in particular a co-simulation method. I will also introduce several multiscale "toy" models of cellular/subcellular levels that were developed with the aim of understanding numerical and technical problems which might appear during co-simulation. Finally, I will present the first steps made towards the development of a multiscale co-simulation tool.<br />
<br />
'''29 Oct 2013 - note: 4:00'''<br />
* Speaker: Mitya Chklovskii<br />
* Affiliation: HHMI/Janelia Farm<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''30 Oct 2013'''<br />
* Speaker: Ilya Nemenman<br />
* Affiliation: Emory University, Departments of Physics and Biology<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Large N in neural data -- expecting the unexpected.<br />
* Abstract: Recently it has become possible to directly measure simultaneous collective states of many biological components, such as neural activities, genetic sequences, or gene expression profiles. These data are revealing striking results, suggesting, for example, that biological systems are tuned to criticality, and that effective models of these systems based on only pairwise interactions among constitutive components provide surprisingly good fits to the data. We will explore a handful of simplified theoretical models, largely focusing on statistical mechanics of Ising spins, that suggest plausible explanations for these observations. Specifically, I will argue that, at least in certain contexts, these intriguing observations should be expected in multivariate interacting data in the thermodynamic limit of many interacting components.<br />
<br />
'''31 Oct 2013'''<br />
* Speaker: Oriol Vinyals<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Brian<br />
* Status: confirmed<br />
* Title: Beyond Deep Learning: Scalable Methods and Models for Learning<br />
* Abstract: In this talk I will briefly describe several techniques explored in my thesis that improve how we efficiently model signal representations and learn useful information from them. The building block of my dissertation is machine learning approaches to classification, where a (typically non-linear) function is learned from labeled examples to map from signals to some useful information (e.g. an object class present in an image, or a word present in an acoustic signal). One of the motivating factors of my work has been advances in neural networks with deep architectures (which have led to the terminology "deep learning") that have shown state-of-the-art performance in acoustic modeling and object recognition -- the main focus of this thesis. In my work, I have contributed both to the learning (or training) of such architectures through faster and more robust optimization techniques, and to the simplification of the deep architecture model to an approach that is simple to optimize. Furthermore, I derived a theoretical bound showing a fundamental limitation of shallow architectures based on sparse coding (which can be seen as a neural network with one hidden layer), thus justifying the need for deeper architectures, while also empirically verifying these architectural choices on speech recognition. Many of my contributions have been used in a wide variety of applications, products and datasets as a result of many collaborations within ICSI and Berkeley, but also at Microsoft Research and Google Research.<br />
<br />
'''6 Nov 2013'''<br />
* Speaker: Garrett T. Kenyon<br />
* Affiliation: Los Alamos National Laboratory, The New Mexico Consortium<br />
* Host: Dylan Paiton<br />
* Status: Confirmed<br />
* Title: Using Locally Competitive Algorithms to Model Top-Down and Lateral Interactions<br />
* Abstract: Cortical connections consist of feedforward, feedback and lateral pathways. Infragranular layers project down the cortical hierarchy to both supra- and infragranular layers at the previous processing level, while the neurons in supragranular layers are linked by extensive long-range lateral projections that cross multiple cortical columns. However, most functional models of visual cortex only account for feedforward connections. Additionally, most models of visual cortex fail to account both for the thalamic projections to non-striate areas and the reciprocal connections from extrastriate areas back to the thalamus. In this talk, I will describe how a modified Locally Competitive Algorithm (LCA; Rozell et al, Neural Comp, 2008) can be used as a unifying framework for exploring the role of top-down and lateral cortical pathways within the context of deep, sparse, generative models. I will also describe an open source software tool called PetaVision that can be used to implement and execute hierarchical LCA-based models on multi-core, multi-node computer platforms without requiring specific knowledge of parallel-programming constructs.<br />
<br />
'''14 Nov 2013 (note: Thursday), ***12:30pm*** '''<br />
* Speaker: Geoffrey J Goodhill<br />
* Affiliation: Queensland Brain Institute and School of Mathematics and Physics, The University of Queensland, Australia<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Computational principles of neural wiring development<br />
* Abstract: Brain function depends on precise patterns of neural wiring. An axon navigating to its target must make guidance decisions based on noisy information from molecular cues in its environment. I will describe a combination of experimental and computational work showing that (1) axons may act as ideal observers when sensing chemotactic gradients, (2) the complex influence of calcium and cAMP levels on guidance decisions can be predicted mathematically, (3) the morphology of growth cones at the axonal tip can be understood in terms of just a few eigenshapes, and remarkably these shapes oscillate in time with periods ranging from minutes to hours. Together this work may shed light on how neural wiring goes wrong in some developmental brain disorders, and how best to promote appropriate regrowth of axons after injury.<br />
<br />
'''4 Dec 2013'''<br />
* Speaker: Zhenwen Dai<br />
* Affiliation: FIAS, Goethe University Frankfurt, Germany.<br />
* Host: Georgios Exarchakis<br />
* Status: Confirmed<br />
* Title: What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach <br />
* Abstract: We study optimal image encoding based on a generative approach with non-linear feature combinations and explicit position encoding. Most approaches to unsupervised learning of visual features, such as sparse coding or ICA, account for translations by representing the same features at different positions. Some earlier models used a separate encoding of features and their positions to facilitate invariant data encoding and recognition. All probabilistic generative models with explicit position encoding have so far assumed a linear superposition of components to encode image patches. Here we apply, for the first time, a model with non-linear feature superposition and explicit position encoding to patches. By avoiding linear superpositions, the studied model represents a closer match to component occlusions, which are ubiquitous in natural images. In order to account for occlusions, the non-linear model encodes patches in a way qualitatively very different from linear models by using component representations separated into mask and feature parameters. We first investigated encodings learned by the model using artificial data with mutually occluding components. We find that the model extracts the components, and that it can correctly identify the occlusive components with the hidden variables of the model. On natural image patches, the model learns component masks and features for typical image components. By using reverse correlation, we estimate the receptive fields associated with the model’s hidden units. We find many Gabor-like or globular receptive fields as well as fields sensitive to more complex structures. Our results show that probabilistic models that capture occlusions and invariances can be trained efficiently on image patches, and that the resulting encoding represents an alternative model for the neural encoding of images in the primary visual cortex. <br />
<br />
'''11 Dec 2013'''<br />
* Speaker: Kai Siedenburg<br />
* Affiliation: UC Davis, Petr Janata's Lab.<br />
* Host: Jesse Engel<br />
* Status: Confirmed<br />
* Title: Characterizing Short-Term Memory for Musical Timbre<br />
* Abstract: Short-term memory is a cognitive faculty central for the apprehension of music and speech. Little is known, however, about memory for musical timbre, despite its "sisterhood" with speech; after all, speech can be regarded as a sequencing of vocal timbre. Past research has isolated many characteristic effects of verbal memory. Are these also in play for non-vocal timbre sequences? We studied this question by considering short-term memory for serial order. Using timbres and dissimilarity data from McAdams et al. (Psych. Research, 1995), we employed a same/different discrimination paradigm. Experiment 1 (N = 30 MU + 30 nonMU) revealed effects of sequence length and timbral dissimilarity of items, as well as an interaction of musical training and pitch variability: in contrast to musicians, non-musicians' performance was impaired by simultaneous changes in pitch, compared to a constant pitch baseline. Experiment 2 (N = 22) studied whether musicians' memory for timbre sequences was independent of pitch irrespective of the degree of complexity of pitch progressions. Comparing sequences with pitch changing within and across standard and comparison to a constant pitch baseline, performance was now clearly impaired for the variable pitch condition. Experiment 3 (N = 22) showed primacy and recency effects for musicians, and reproduced a positive effect of timbral heterogeneity of sequences. Our findings demonstrate the presence of hallmark effects of verbal memory such as similarity, word length, and primacy/recency for the domain of non-vocal timbre, and suggest that memory for speech and non-vocal timbre sequences might to a large extent share underlying mechanisms.<br />
<br />
'''12 Dec 2013'''<br />
* Speaker: Matthias Bethge<br />
* Affiliation: University of Tubingen<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''22 Jan 2014'''<br />
* Speaker: Thomas Martinetz<br />
* Affiliation: Univ Luebeck<br />
* Host: Bruno/Fritz<br />
* Status: confirmed<br />
* Title: Orthogonal Sparse Coding and Sensing<br />
* Abstract: Sparse Coding has been a very successful concept since many natural signals have the property of being sparse in some dictionary (basis). Some natural signals are even sparse in an orthogonal basis, most prominently natural images, which are sparse in an appropriate wavelet transform. An encoding in an orthogonal basis has a number of advantages, e.g., finding the optimal coding coefficients is simply a projection instead of being NP-hard. Given some data, we want to find the orthogonal basis which provides the sparsest code. This problem can be seen as a generalization of Principal Component Analysis. We present an algorithm, Orthogonal Sparse Coding (OSC), which is able to find this basis very robustly. On natural images, it compresses on the level of JPEG, but it can adapt to arbitrary and special data sets and achieve significant improvements. With the property of being sparse in some orthogonal basis, we show how signals can be sensed very efficiently in a hierarchical manner with at most k log D sensing actions. This hierarchical sensing might relate to the way we sense the world, with interesting applications in active vision.<br />
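The computational advantage claimed in the abstract can be shown directly: for an orthonormal basis, the optimal k-sparse code is a single projection followed by keeping the k largest-magnitude coefficients. A small sketch, with a random orthonormal basis standing in for a learned OSC basis (sizes and sparsity level are illustrative assumptions):

```python
import numpy as np

def sparse_code_orthogonal(x, U, k):
    """Optimal k-sparse code of x in an orthonormal basis U: project, keep top k."""
    a = U.T @ x                        # one projection yields all coefficients
    a_sparse = np.zeros_like(a)
    keep = np.argsort(np.abs(a))[-k:]  # indices of the k largest magnitudes
    a_sparse[keep] = a[keep]
    return a_sparse

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # random orthonormal basis
a_true = np.zeros(16)
a_true[[2, 7, 11]] = [3.0, -2.0, 1.5]
x = U @ a_true                                  # a signal exactly 3-sparse in U
a_hat = sparse_code_orthogonal(x, U, 3)
print(np.allclose(a_hat, a_true))               # -> True: exact recovery
```

With a non-orthogonal dictionary the same task is a combinatorial search; orthogonality is what reduces it to sorting one projection.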
<br />
'''29 Jan 2014'''<br />
* Speaker: David Klein<br />
* Affiliation: Audience<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''5 Feb 2014''' (leave open for Barth/Martinetz seminar)<br />
<br />
'''12 Feb 2014'''<br />
* Speaker: Ilya Sutskever <br />
* Affiliation: Google<br />
* Host: Zayd<br />
* Status: confirmed<br />
* Title: Continuous vector representations for machine translation<br />
* Abstract: Dictionaries and phrase tables are the basis of modern statistical machine translation systems. I will present a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures using large monolingual data, and by mapping between the languages using a small bilingual dataset. It uses distributed representations of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish. This method makes few assumptions about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pair. Joint work with Tomas Mikolov and Quoc Le.<br />
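The core idea above, learning a linear map between two embedding spaces from a small bilingual seed set and translating by nearest neighbor in the target space, can be sketched in a few lines. The toy random "embeddings" below are illustrative assumptions, not real word2vec vectors or the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d_src, d_tgt, n_pairs = 6, 5, 40
X = rng.normal(size=(n_pairs, d_src))   # source-language word vectors
W_true = rng.normal(size=(d_src, d_tgt))
Z = X @ W_true                          # paired target-language word vectors

# Least-squares fit of the linear map from the bilingual seed pairs,
# minimizing ||X W - Z||^2 over W.
W, *_ = np.linalg.lstsq(X, Z, rcond=None)
mapped = X @ W                          # map every source vector across

# "Translate" word 0: nearest target neighbor of its mapped vector.
dists = np.linalg.norm(Z - mapped[0], axis=1)
print(int(np.argmin(dists)))            # -> 0, its true translation
```

Because this toy system is exactly linear, the fit is exact; with real embeddings the map is only approximate and precision@k is the natural metric.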
<br />
'''25 Feb 2014'''<br />
* Speaker: Alexander Terekhov <br />
* Affiliation: CNRS - Université Paris Descartes<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Constructing space: how a naive agent can learn spatial relationships by observing sensorimotor contingencies<br />
* Abstract:<br />
<br />
'''12 March 2014'''<br />
* Speaker: Carlos Portera-Cailliau<br />
* Affiliation: UCLA<br />
* Host: Mike<br />
* Status: confirmed<br />
* Title: Circuit defects in the neocortex of Fmr1 knockout mice<br />
* Abstract: TBA<br />
<br />
'''19 March 2014'''<br />
* Speaker: Dean Buonomano<br />
* Affiliation: UCLA<br />
* Host: Mike<br />
* Status: confirmed<br />
* Title: State-dependent Networks: Timing and Computations Based on Neural Dynamics and Short-term Plasticity<br />
* Abstract: The brain’s ability to seamlessly assimilate and process spatial and temporal information is critical to most behaviors, from understanding speech to playing the piano. Indeed, because the brain evolved to navigate a dynamic world, timing and temporal processing represent a fundamental computation. We have proposed that timing and the processing of temporal information emerge from the interaction between incoming stimuli and the internal state of neural networks. The internal state is defined not only by ongoing activity (the active state) but also by time-varying synaptic properties, such as short-term synaptic plasticity (the hidden state). One prediction of this hypothesis is that timing is a general property of cortical circuits. We provide evidence in this direction by demonstrating that in vitro cortical networks can “learn” simple temporal patterns. Finally, previous theoretical studies have suggested that recurrent networks capable of self-perpetuating activity hold significant computational potential. However, harnessing this potential has been hampered by the fact that such networks are chaotic. We show that it is possible to “tame” chaos through recurrent plasticity, and create a novel and powerful general framework for how cortical circuits compute.<br />
<br />
'''26 March 2014'''<br />
* Speaker: Robert G. Smith<br />
* Affiliation: University of Pennsylvania<br />
* Host: Mike S<br />
* Status: confirmed<br />
* Title: Role of Dendritic Computation in the Direction-Selective Circuit of Retina<br />
* Abstract: The retina utilizes a variety of signal processing mechanisms to compute direction from image motion. The computation is accomplished by a circuit that includes starburst amacrine cells (SBACs), which are GABAergic neurons presynaptic to direction-selective ganglion cells (DSGCs). SBACs are symmetric neurons with several branched dendrites radiating out from the soma. When a stimulus moving back and forth along a SBAC dendrite sequentially activates synaptic inputs, larger post-synaptic potentials (PSPs) are produced in the dendritic tips when the stimulus moves outwards from the soma. The directional difference in EPSP amplitude is further amplified near the dendritic tips by voltage-gated channels to produce directional release of GABA. Reciprocal inhibition between adjacent SBACs may also amplify directional release. Directional signals in the independent SBAC branches are preserved because each dendrite makes selective contacts only with DSGCs of the appropriate preferred direction. Directional signals are further enhanced within the dendritic arbor of the DSGC, which essentially comprises an array of distinct dendritic compartments. Each of these dendritic compartments locally sums excitatory and inhibitory inputs, amplifies them with voltage-gated channels, and generates spikes that propagate to the axon via the soma. Overall, the computation of direction in the retina is performed by several local dendritic mechanisms, both presynaptic and postsynaptic, with the result that directional responses are robust over a broad range of stimuli.<br />
<br />
'''16 April 2014'''<br />
* Speaker: David Pfau<br />
* Affiliation: Columbia<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''22 April 2014 *Tuesday*'''<br />
* Speaker: Jochen Braun<br />
* Affiliation: Otto-von-Guericke University, Magdeburg<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Dynamics of visual perception and collective neural activity<br />
* Abstract:<br />
<br />
'''29 April 2014'''<br />
* Speaker: Giuseppe Vitiello<br />
* Affiliation: University of Salerno<br />
* Host: Fritz/Walter Freeman<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''30 April 2014'''<br />
* Speaker: Masataka Watanabe<br />
* Affiliation: University of Tokyo / Max Planck Institute for Biological Cybernetics<br />
* Host: Gautam Agarwal<br />
* Status: confirmed<br />
* Title: Turing Test for Machine Consciousness and the Chaotic Spatiotemporal Fluctuation Hypothesis<br />
* Abstract: I propose an experimental method to test various hypotheses on consciousness. Inspired by Sperry's observation that split-brain patients possess two independent streams of consciousness, the idea is to implement candidate neural mechanisms of visual consciousness onto an artificial cortical hemisphere and test whether subjective experience is evoked in the device's visual hemifield. In contrast to modern neurosynthetic devices, I show that mimicking interhemispheric connectivity assures that authentic and fine-grained subjective experience arises only when a stream of consciousness is generated within the device. It is valid under a widely believed assumption regarding interhemispheric connectivity and neuronal stimulus-invariance. (I will briefly explain my own evidence of human V1 not responding to changes in the contents of visual awareness [1])<br />
<br />
If consciousness is actually generated within the device, we should be able to construct a case where two objects presented in the device's visual field are distinguishable by visual experience but not by what is communicated through the brain-machine interface. As strange as it may sound, and clearly violating the laws of physics, this is likely to be happening in the intact brain, where unified subjective bilateral vision and its verbal report occur without the total interhemispheric exchange of conscious visual information.<br />
<br />
Together, I present a hypothesis on the neural mechanism of consciousness, “The Chaotic Spatiotemporal Fluctuation Hypothesis”, that passes the proposed test for visual qualia and also explains how physics as we know it today is violated. Here, neural activity is divided into two components, the time-averaged activity and the residual temporally fluctuating activity, where the former serves as the content of consciousness (neuronal population vector) and the latter as consciousness itself. The content is “read” into consciousness in the sense that every local perturbation caused by change in the neuronal population vector creates a spatiotemporal wave in the fluctuation component that travels throughout the system. Deterministic chaos assures that every local difference makes a difference to the whole of the dynamics, as in the butterfly effect, serving as a foundation for the holistic nature of consciousness. I will present data from simultaneous electrophysiology-fMRI recordings and human fMRI [2] that support the existence of such large-scale causal fluctuation.<br />
<br />
Here, the chaotic fluctuation cannot be decoded to trace back the original perturbation in the neuronal population vector, because the initial states of all neurons would be required with infinite precision to do so. Hence what is transmitted between the two hemispheres is not "information" in the normal sense. This illustrates the violation of physics by the metaphysical assumption, "chaotic spatiotemporal fluctuation is consciousness", where unification of bilateral vision and the solving of visual tasks (e.g. perfect symmetry detection) are achieved without exchanging the otherwise required Shannon information between the two hemispheres.<br />
<br />
Finally, minimal and realistic versions of the proposed test for visual qualia can be conducted on laboratory animals to validate the hypothesis. These deal with two biological hemispheres, which we already know contain consciousness. We dissect interhemispheric connectivity and form instead an artificial one that is capable of filtering out the neural fluctuation component. A limited interhemispheric connectivity may be sufficient, which would drastically reduce the technological challenge. If the subject is capable of conducting a bilateral stimulus-matching task with the full artificial interhemispheric connectivity, but not when the fluctuation component is filtered out, it can be considered strong supporting evidence for the hypothesis.<br />
<br />
1. Watanabe, M., Cheng, K., Ueno, K., Asamizuya, T., Tanaka, K., Logothetis, N., Attention but not awareness modulates the BOLD signal in the human V1 during binocular suppression. Science, 2011. 334(6057): p. 829-31.<br />
<br />
2. Watanabe, M., Bartels, A., Macke, J., Logothetis, N., Temporal jitter of the BOLD signal reveals a reliable initial dip and improved spatial resolution. Curr Biol, 2013. 23(21): p. 2146-50.<br />
<br />
'''11 June 2014'''<br />
* Speaker: Stuart Hameroff<br />
* Affiliation: University of Arizona, Tucson<br />
* Host: Gautam<br />
* Status: confirmed<br />
* Title: ‘Tuning the brain’ – Treating mental states through microtubule vibrations <br />
* Abstract: Do mental states derive entirely from brain neuronal membrane activities? Neuronal interiors are organized by microtubules (‘MTs’), protein polymers proposed to encode memory, process information and support consciousness. Using nanotechnology, Bandyopadhyay’s group at MIT has shown coherent vibrations (megahertz to 10 kilohertz) from microtubule bundles inside active neurons, vibrations (electric field potentials ~40 to 50 mV) able to influence membrane potentials. This suggests EEG rhythms are ‘beat’ frequencies of megahertz vibrations in microtubules inside neurons (Hameroff and Penrose, 2014), and that consciousness and cognition involve vibrational patterns resonating across scales in the brain, more like music than computation. MT megahertz vibrations may be a useful therapeutic target for ‘tuning’ mood and mental states. Among noninvasive transcranial brain stimulation techniques (TMS, tDCS), transcranial ultrasound (TUS) delivers megahertz mechanical vibrations. Applied at the scalp, low-intensity, sub-thermal TUS safely reaches the brain. In human studies, brief (15 to 30 seconds) TUS at 0.5, 2 and 8 megahertz to frontal-temporal cortex results in 40 minutes or longer of reported mood improvement, and focused TUS enhances sensory discrimination (Legon et al, 2014). In vitro, ultrasound promotes neurite outgrowth in embryonic neurons (Raman), and stabilizes microtubules against disassembly (Gupta). (In Alzheimer’s disease, MTs disassemble and release tau.) These findings suggest ‘tuning the brain’ with TUS should be a safe, effective and inexpensive treatment for Alzheimer’s, traumatic brain injury, depression, anxiety, PTSD and other disorders. <br />
<br />
References: Hameroff S, Penrose R (2014) Phys Life Rev http://www.sciencedirect.com/science/article/pii/S1571064513001188; Sahu et al (2013) Biosens Bioelectron 47:141–8; Sahu et al (2013) Appl Phys Lett 102:123701; Legon et al (2014) Nature Neuroscience 17: 322–329<br />
<br />
'''25 June 2014'''<br />
* Speaker: Peter Loxley<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: The two-dimensional Gabor function adapted to natural image statistics: An analytical model of simple-cell responses in the early visual system<br />
* Abstract: TBA<br />
<br />
=== 2012/13 academic year ===<br />
<br />
'''26 Sept 2012''' <br />
* Speaker: Jason Yeatman<br />
* Affiliation: Department of Psychology, Stanford University<br />
* Host: Bruno/Susana Chung<br />
* Status: confirmed<br />
* Title: The Development of White Matter and Reading Skills<br />
* Abstract: The development of cerebral white matter involves both myelination and pruning of axons, and the balance between these two processes may differ between individuals. Cross-sectional measures of white matter development mask the interplay between these active developmental processes and their connection to cognitive development. We followed a cohort of 39 children longitudinally for three years, and measured white matter development and reading development using diffusion tensor imaging and behavioral tests. In the left arcuate and inferior longitudinal fasciculus, children with above-average reading skills initially had low fractional anisotropy (FA) with a steady increase over the 3-year period, while children with below-average reading skills had higher initial FA that declined over time. We describe a dual-process model of white matter development that balances biological processes that have opposing effects on FA, such as axonal myelination and pruning, to explain the pattern of results.<br />
<br />
'''8 Oct 2012''' <br />
* Speaker: Sophie Deneve<br />
* Affiliation: Laboratoire de Neurosciences cognitives, ENS-INSERM<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Balanced spiking networks can implement dynamical systems with predictive coding<br />
* Abstract: Neural networks can integrate sensory information and generate continuously varying outputs, even though individual neurons communicate only with spikes---all-or-none events. Here we show how this can be done efficiently if spikes communicate "prediction errors" between neurons. We focus on the implementation of linear dynamical systems and derive a spiking network model from a single optimization principle. Our model naturally accounts for two puzzling aspects of cortex. First, it provides a rationale for the tight balance and correlations between excitation and inhibition. Second, it predicts asynchronous and irregular firing as a consequence of predictive population coding, even in the limit of vanishing noise. We show that our spiking networks have error-correcting properties that make them far more accurate and robust than comparable rate models. Our approach suggests spike times do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly under-estimated.<br />
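The "spikes communicate prediction errors" idea above can be illustrated with a toy one-dimensional readout: each unit fires only when its spike would reduce the coding error, so the population tracks a signal with asynchronous, greedy spiking. This is an assumption-laden simplification of the talk's derivation, not the paper's network; the weights, leak, and time step are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, leak, n_steps = 20, 0.001, 10.0, 2000
d = np.abs(rng.normal(0.05, 0.01, N))   # decoding weight of each unit
x = 1.0                                  # constant 1-D target signal
xhat, n_spikes = 0.0, 0
for _ in range(n_steps):
    xhat *= 1.0 - dt * leak              # leaky decay of the readout
    V = d * (x - xhat)                   # each unit's projected prediction error
    i = int(np.argmax(V - d ** 2 / 2))   # unit whose spike helps most
    if V[i] > d[i] ** 2 / 2:             # fire only if it reduces (x - xhat)^2
        xhat += d[i]                     # the spike updates the shared readout
        n_spikes += 1
print(round(xhat, 3), n_spikes)
```

The firing condition d_i*(x - xhat) > d_i^2/2 is exactly the requirement that a spike shrink the squared error, which is what keeps the readout within one weight of the signal without any unit-by-unit rate code.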
<br />
<br />
'''19 Oct 2012'''<br />
* Speaker: Gert Van Dijck<br />
* Affiliation: Cambridge<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: A solution to identifying neurones using extracellular activity in awake animals: a probabilistic machine-learning approach<br />
* Abstract: Electrophysiological studies over the last fifty years have been hampered by the difficulty of reliably assigning signals to identified cortical neurones. Previous studies have employed a variety of measures based on spike timing or waveform characteristics to tentatively classify other neurone types (Vos et al., Eur. J. Neurosci., 1999; Prsa et al., J. Neurosci., 2009), in some cases supported by juxtacellular labelling (Simpson et al., Prog. Brain Res., 2005; Holtzman et al., J. Physiol., 2006; Barmack and Yakhnitsa, J. Neurosci., 2008; Ruigrok et al., J. Neurosci., 2011), or intracellular staining and / or assessment of membrane properties (Chadderton et al., Nature, 2004; Jorntell and Ekerot, J. Neurosci., 2006; Rancz et al., Nature, 2007). Anaesthetised animals have been widely used as they can provide a ground-truth through neuronal labelling which is much harder to achieve in awake animals where spike-derived measures tend to be relied upon (Lansink et al., Eur. J. Neurosci., 2010). Whilst spike-shapes carry potentially useful information for classifying neuronal classes, they vary with electrode type and the geometric relationship between the electrode and the spike generation zone (Van Dijck et al., Int. J. Neural Syst., 2012). Moreover, spike-shape measurement is achieved with a variety of techniques, making it difficult to compare and standardise between laboratories.In this study we build probabilistic models on the statistics derived from the spike trains of spontaneously active neurones in the cerebellum and the ventral midbrain. The mean spike frequency in combination with the log-interval-entropy (Bhumbra and Dyball, J. Physiol.-London, 2004) of the inter-spike-interval distribution yields the highest prediction accuracy. The cerebellum model consists of two sub-models: a molecular layer - Purkinje layer model and a granular layer - Purkinje layer model. 
The first model identifies molecular layer interneurones and Purkinje cells with high accuracy (92.7 %), while the latter identifies Golgi cells, granule cells, mossy fibres and Purkinje cells with high accuracy (99.2 %). Furthermore, it is shown that the model trained on anaesthetised rat and decerebrate cat data has broad applicability to other species and behavioural states: anaesthetised mice (80 %), awake rabbits (94.2 %) and awake rhesus monkeys (89 - 90 %). Recently, optogenetics has made it possible to obtain a ground truth about cell classes. Using optogenetically identified GABAergic and dopaminergic cells, we build similar statistical models to identify these neuron types in the ventral midbrain. Hence, this illustrates that our approach will be of general use to a broad variety of laboratories.<br />
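The classification scheme described in this abstract can be illustrated with a toy sketch (not the authors' code; the histogram-based entropy estimate and the single-Gaussian class models below are simplifying assumptions): extract each unit's mean firing rate and the entropy of its log inter-spike-interval distribution, then assign the unit to the class whose fitted Gaussian gives the highest likelihood.

```python
import numpy as np

def spike_features(spike_times):
    """Mean firing rate and entropy (bits) of the log inter-spike-interval
    distribution (cf. Bhumbra and Dyball, 2004), estimated by histogram."""
    times = np.sort(spike_times)
    isis = np.diff(times)
    rate = len(times) / (times[-1] - times[0])
    counts, _ = np.histogram(np.log(isis), bins=30)
    p = counts[counts > 0] / counts.sum()
    entropy = -np.sum(p * np.log2(p))
    return np.array([rate, entropy])

def gaussian_logpdf(x, mean, cov):
    """Log density of a multivariate Gaussian, used as the class model."""
    d = x - mean
    return -0.5 * (d @ np.linalg.solve(cov, d)
                   + np.log(np.linalg.det(cov)) + len(x) * np.log(2 * np.pi))

def classify(x, class_params):
    """Assign x to the class whose Gaussian gives the highest likelihood.
    class_params maps class name -> (mean vector, covariance matrix)."""
    return max(class_params, key=lambda c: gaussian_logpdf(x, *class_params[c]))
```

In practice the class means and covariances would be fitted to labelled (e.g. juxtacellularly or optogenetically identified) training data; here they are supplied directly.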
<br />
'''Tuesday, 23 Oct 2012''' <br />
* Speaker: Jaimie Sleigh<br />
* Affiliation: University of Auckland<br />
* Host: Fritz/Andrew Szeri<br />
* Status: confirmed<br />
* Title: Is General Anesthesia a failure of cortical information integration?<br />
* Abstract: General anesthesia and natural sleep share some commonalities and some differences. Quite a lot is known about the chemical and neuronal effects of general anesthetic drugs. There are two main groups of anesthetic drugs, which can be distinguished by their effects on the EEG. The most commonly used drugs exert a strong GABAergic action, whereas a second group is characterized by minimal GABAergic effects, but significant NMDA blockade. It is less clear which of these effects, and by what mechanism, cause the patient to fail to wake up when the surgeon cuts them. I will present some results from experimental brain slice work, and theoretical mean field modelling of anesthesia and sleep, that support the idea that the final common mechanism of both types of anaesthesia is fragmentation of long-distance information flow in the cortex.<br />
<br />
'''31 Oct 2012''' (Halloween)<br />
* Speaker: Jonathan Landy<br />
* Affiliation: UCSB<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Mean-field replica theory: review of basics and a new approach<br />
* Abstract: Replica theory provides a general method for evaluating the mode of a distribution, and has varied applications to problems in statistical mechanics, signal processing, etc. Evaluation of the formal expressions arising in replica theory represents a formidable technical challenge, but one that physicists have apparently intuited correct methods for handling. In this talk, I will first provide a review of the historical development of replica theory, covering: 1) motivation, 2) the intuited "Parisi ansatz" solution, 3) continued controversies, and 4) a survey of applications (including to neural networks). Following this, I will discuss an exploratory effort of mine, aimed at developing an ansatz-free solution method. As an example, I will work out the phase diagram for a simple spin-glass model. This talk is intended primarily as a tutorial.<br />
<br />
'''7 Nov 2012''' <br />
* Speaker: Tom Griffiths<br />
* Affiliation: UC Berkeley<br />
* Host:Daniel Little<br />
* Status: Confirmed<br />
* Title: Identifying human inductive biases<br />
* Abstract: People are remarkably good at acquiring complex knowledge from limited data, as is required in learning causal relationships, categories, or aspects of language. Successfully solving inductive problems of this kind requires having good "inductive biases" - constraints that guide inductive inference. Viewed abstractly, understanding human learning requires identifying these inductive biases and exploring their origins. I will argue that probabilistic models of cognition provide a framework that can facilitate this project, giving a transparent characterization of the inductive biases of ideal learners. I will outline how probabilistic models are traditionally used to solve this problem, and then present a new approach that uses Markov chain Monte Carlo algorithms as the basis for an experimental method that magnifies the effects of inductive biases.<br />
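The experimental method mentioned at the end of this abstract can be sketched in simulation (an illustrative assumption-laden toy, not the speaker's code): on each trial a participant chooses between the current stimulus and a proposed alternative. If the choices follow Luce's choice rule, they implement Barker's acceptance function, so the sequence of chosen stimuli forms a Markov chain whose stationary distribution is the participant's subjective distribution, revealing the inductive bias.

```python
import numpy as np

def mcmc_with_people(subjective_prob, propose, x0, n_steps, rng):
    """Simulate a chain where each 'trial' offers a choice between the
    current stimulus x and a proposal x_new. Choices by Luce's choice
    rule, p(x_new) / (p(x) + p(x_new)), implement Barker acceptance,
    so the chain samples from the subjective distribution."""
    x, chain = x0, []
    for _ in range(n_steps):
        x_new = propose(x, rng)
        p_new, p_old = subjective_prob(x_new), subjective_prob(x)
        if rng.random() < p_new / (p_new + p_old):  # participant picks x_new
            x = x_new
        chain.append(x)
    return np.array(chain)

# Example: a simulated participant whose subjective distribution is N(0, 1).
rng = np.random.default_rng(0)
chain = mcmc_with_people(
    subjective_prob=lambda x: np.exp(-0.5 * x * x),   # unnormalized is fine
    propose=lambda x, r: x + r.normal(0.0, 1.0),      # random-walk proposal
    x0=3.0, n_steps=20000, rng=rng)
samples = chain[2000:]  # discard burn-in
```

With a real participant, `subjective_prob` is of course not available; the point of the design is that the human choice behavior plays the role of the acceptance step.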
<br />
'''19 Nov 2012''' (Monday) (Thanksgiving week)<br />
* Speaker: Bin Yu<br />
* Affiliation: Dept. of Statistics and EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Representation of Natural Images in V4<br />
* Abstract: The functional organization of area V4 in the mammalian ventral visual pathway is far from being well understood. V4 is believed to play an important role in the recognition of shapes and objects and in visual attention, but the complexity of this cortical area makes it hard to analyze. In particular, no current model of V4 has shown good predictions for neuronal responses to natural images and there is no consensus on the primary role of V4.<br />
In this talk, we present analysis of electrophysiological data on the response of V4 neurons to natural images. We propose a new computational model that achieves comparable prediction performance for V4 as for V1 neurons. Our model does not rely on any pre-defined image features but only on invariance and sparse coding principles. We interpret our model using sparse principal component analysis and discover two groups of neurons: those selective to texture versus those selective to contours. This supports the thesis that one primary role of V4 is to extract objects from background in the visual field. Moreover, our study also confirms the diversity of V4 neurons. Among those selective to contours, some of them are selective to orientation, others to acute curvature features.<br />
(This is joint work with J. Mairal, Y. Benjamini, B. Willmore, M. Oliver, and J. Gallant.)<br />
<br />
'''30 Nov 2012''' <br />
* Speaker: Yan Karklin<br />
* Affiliation: NYU<br />
* Host: Tyler<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''10 Dec 2012 (note this would be the Monday after NIPS)''' <br />
* Speaker: Marius Pachitariu<br />
* Affiliation: Gatsby / UCL<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: NIPS paper "Learning visual motion in recurrent neural networks"<br />
* Abstract: We present a dynamic nonlinear generative model for visual motion based on a latent representation of binary-gated Gaussian variables connected in a network. Trained on sequences of images by an STDP-like rule, the model learns to represent different movement directions in different variables. We use an online approximate inference scheme that can be mapped to the dynamics of networks of neurons. Probed with drifting grating stimuli and moving bars of light, neurons in the model show patterns of responses analogous to those of direction-selective simple cells in primary visual cortex. We show how the computations of the model are enabled by a specific pattern of learnt asymmetric recurrent connections. I will also briefly discuss our application of recurrent neural networks as statistical models of simultaneously recorded spiking neurons.<br />
<br />
'''12 Dec 2012''' <br />
* Speaker: Ian Goodfellow<br />
* Affiliation: U Montreal<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''7 Jan 2013'''<br />
* Speaker: Stuart Hameroff<br />
* Affiliation: University of Arizona <br />
* Host: Gautam Agarwal<br />
* Status: confirmed<br />
* Title: Quantum cognition and brain microtubules <br />
* Abstract: Cognitive decision processes are generally modelled with classical Bayesian probabilities, but may be better suited to quantum mathematics. For example: 1) Psychological conflict, ambiguity and uncertainty can be viewed as (quantum) superposition of multiple possible judgments and beliefs. 2) Measurement (e.g. answering a question, reaching a decision) reduces possibilities to definite states (‘constructing reality’, ‘collapsing the wave function’). 3) Previous questions influence subsequent answers, so sequence affects outcomes (‘contextual non-commutativity’). 4) Judgments and choices may deviate from classical logic, suggesting random, or ‘non-computable’ quantum influences. Can quantum cognition operate in the brain? Do classical brain activities simulate quantum processes? Or have biomolecular quantum devices evolved? In this talk I will discuss how a finer-scale, intra-neuronal level of quantum information processing in cytoskeletal microtubules can accumulate, operate upon and integrate quantum information and memory for self-collapse to classical states which regulate axonal firings, controlling behavior.<br />
<br />
'''Monday 14 Jan 2013, 1:00pm'''<br />
* Speaker: Dibyendu Mandal <br />
* Affiliation: Physics Dept., University of Maryland (Jarzynski group)<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: An exactly solvable model of Maxwell’s demon<br />
* Abstract: The paradox of Maxwell’s demon has stimulated numerous thought experiments, leading to discussions about the thermodynamic implications of information processing. However, the field has lacked a tangible example or model of an autonomous, mechanical system that reproduces the actions of the demon. To address this issue, we introduce an explicit model of a device that can deliver work to lift a mass against gravity by rectifying thermal fluctuations, while writing information to a memory register. We solve for the steady-state behavior of the model and construct its nonequilibrium phase diagram. In addition to the engine-like action described above, we identify a Landauer eraser region in the phase diagram where the model uses externally supplied work to remove information from the memory register. Our model offers a simple paradigm for investigating the thermodynamics of information processing by exposing a transparent mechanism of operation.<br />
<br />
'''23 Jan 2013'''<br />
* Speaker: Carlos Brody<br />
* Affiliation: Princeton<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Neural substrates of decision-making in the rat<br />
* Abstract: Gradual accumulation of evidence is thought to be a fundamental component of decision-making. Over the last 16 years, research in non-human primates has revealed neural correlates of evidence accumulation in parietal and frontal cortices, and other brain areas. However, the circuit mechanisms underlying these neural correlates remain unknown. Reasoning that a rodent model of evidence accumulation would allow a greater number of experimental subjects, and therefore experiments, as well as facilitate the use of molecular tools, we developed a rat accumulation-of-evidence task, the "Poisson Clicks" task. In this task, sensory evidence is delivered in pulses whose precisely controlled timing varies widely within and across trials. The resulting data are analyzed with models of evidence accumulation that use the richly detailed information of each trial’s pulse timing to distinguish between different decision mechanisms. The method provides great statistical power, allowing us to: (1) provide compelling evidence that rats are indeed capable of gradually accumulating evidence for decision-making; (2) accurately estimate multiple parameters of the decision-making process from behavioral data; and (3) measure, for the first time, the diffusion constant of the evidence accumulator, which we show to be optimal (i.e., equal to zero). In addition, the method provides a trial-by-trial, moment-by-moment estimate of the value of the accumulator, which can then be compared in awake behaving electrophysiology experiments to trial-by-trial, moment-by-moment neural firing rate measures. Based on such a comparison, we describe data and a novel analysis approach that reveals differences between parietal and frontal cortices in the neural encoding of accumulating evidence. 
Finally, using semi-automated training methods to produce tens of rats trained in the Poisson Clicks accumulation of evidence task, we have also used pharmacological inactivation to ask, for the first time, whether parietal and frontal cortices are required for accumulation of evidence, and we are using optogenetic methods to rapidly and transiently inactivate brain regions so as to establish precisely when, during each decision-making trial, it is that each brain region's activity is necessary for performance of the task.<br />
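A heavily simplified simulation of one "Poisson Clicks" trial might look as follows (illustrative only; the actual models described above fit additional parameters such as leak, adaptation, bounds and the diffusion constant to behavioral data):

```python
import numpy as np

def poisson_clicks_trial(rate_left, rate_right, duration, click_noise_sd, rng):
    """One toy trial: left/right click counts are Poisson with the given
    rates (clicks per second), each click contributes -1 (left) or +1
    (right) plus Gaussian noise to the accumulator, and the choice is
    the sign of the final accumulated evidence."""
    n_left = rng.poisson(rate_left * duration)
    n_right = rng.poisson(rate_right * duration)
    clicks = np.concatenate([-np.ones(n_left), np.ones(n_right)])
    evidence = np.sum(clicks + click_noise_sd * rng.standard_normal(len(clicks)))
    return "right" if evidence > 0 else "left"
```

Because the click times are experimentally controlled, a model of this kind can be fit trial by trial, which is what gives the method its statistical power.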
<br />
'''28 Jan 2013'''<br />
* Speaker: Eugene M. Izhikevich<br />
* Affiliation: Brain Corporation<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Spikes<br />
* Abstract: Most communication in the brain is via spikes. While we understand the spike-generation mechanism of individual neurons, we fail to appreciate the spike-timing code and its role in neural computations. The speaker starts with simple models of neuronal spiking and bursting, describes small neuronal circuits that learn spike-timing code via spike-timing dependent plasticity (STDP), and finishes with biologically detailed and anatomically accurate large-scale brain models.<br />
<br />
'''29 Jan 2013'''<br />
* Speaker: Goren Gordon<br />
* Affiliation: Weizmann Institute<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Hierarchical Curiosity Loops – Model, Behavior and Robotics<br />
* Abstract: Autonomously learning about one's own body and its interaction with the environment is a formidable challenge, yet it is ubiquitous in biology: every animal pup and every human infant accomplish this task in their first few months of life. Furthermore, biological agents’ curiosity actively drives them to explore and experiment in order to expedite their learning progress. To bridge the gap between biological and artificial agents, a formal mathematical theory of curiosity was developed that attempts to explain observed biological behaviors and enable curiosity emergence in robots. In the talk, I will present the hierarchical curiosity loops model, its application to rodent exploratory behavior, and its implementation in a fully autonomously learning and behaving reaching robot.<br />
<br />
'''29 Jan 2013'''<br />
* Speaker: Jenny Read<br />
* Affiliation: Institute of Neuroscience, Newcastle University<br />
* Host: Sarah<br />
* Status: confirmed<br />
* Title: Stereoscopic vision<br />
* Abstract: [To be written]<br />
<br />
'''7 Feb 2013'''<br />
* Speaker: Valero Laparra<br />
* Affiliation: University of Valencia<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Empirical statistical analysis of phases in Gabor filtered natural images<br />
* Abstract:<br />
<br />
'''20 Feb 2013'''<br />
* Speaker: Dolores Bozovic<br />
* Affiliation: UCLA<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Bifurcations and phase-locking dynamics in the auditory system<br />
* Abstract: The inner ear constitutes a remarkable biological sensor that exhibits nanometer-scale sensitivity of mechanical detection. The first step in auditory processing is performed by hair cells, which convert movement into electrical signals via the opening of mechanically gated ion channels. These cells operate in a viscous medium, but can nevertheless sustain oscillations, amplify incoming signals, and even exhibit spontaneous motility, indicating the presence of an underlying active amplification system. Theoretical models have proposed that a hair cell constitutes a nonlinear system with an internal feedback mechanism that can drive it across a bifurcation and into an unstable regime. Our experiments explore the nonlinear response as well as the feedback mechanisms that enable self-tuning already at the peripheral level, as measured in vitro on sensory tissue. A simple dynamical systems framework that captures the main features of the experimentally observed behavior in the form of an Arnold tongue will be discussed.<br />
<br />
'''27 March 2013'''<br />
* Speaker: Dale Purves<br />
* Affiliation: Duke<br />
* Host: Sarah<br />
* Status: confirmed<br />
* Title: How Visual Evolution Determines What We See<br />
* Abstract: Information about the physical world is excluded from visual stimuli by the nature of biological vision (the inverse optics problem). Nonetheless, humans and other visual animals routinely succeed in their environments. The talk will explain how the assignment of perceptual values to visual stimuli according to the frequency of occurrence of stimulus patterns resolves the inverse problem and determines the basic visual qualities we see. This interpretation of vision implies that the best (and perhaps the only) way to understand visual system circuitry is to evolve it, an idea supported by recent work.<br />
<br />
'''9 April 2013'''<br />
* Speaker: Mounya Elhilali<br />
* Affiliation: Johns Hopkins<br />
* Host: Tyler<br />
* Status: confirmed<br />
* Title: Attention at the cocktail party: Neural bases and computational strategies for auditory scene analysis<br />
* Abstract: The perceptual organization of sounds in the environment into coherent objects is a feat constantly facing the auditory system. It manifests itself in the everyday challenge faced by humans and animals alike to parse complex acoustic information arising from multiple sound sources into separate auditory streams. While seemingly effortless, uncovering the neural mechanisms and computational principles underlying this remarkable ability remain a challenge for both the experimental and theoretical neuroscience communities. In this talk, I discuss the potential role of neuronal tuning in mammalian primary auditory cortex in mediating this process. I also examine the role of mechanisms of attention in adapting this neural representation to reflect both the sensory content and the changing behavioral context of complex acoustic scenes.<br />
<br />
'''17 April 2013'''<br />
* Speaker: Wiktor Młynarski<br />
* Affiliation: Max Planck Institute for Mathematics in the Sciences<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: Statistical Models of Binaural Sounds<br />
* Abstract: The auditory system exploits disparities in the sounds arriving at the left and right ear to extract information about the spatial configuration of sound sources. According to the widely acknowledged Duplex Theory, sounds of low frequency are localized based on Interaural Time Differences (ITDs) and localization of high frequency sources relies on Interaural Level Differences (ILDs). Natural sounds, however, possess a rich structure and contain multiple frequency components. This leads to the question: what are the contributions of different cues to sound position identification in the natural environment and how much information do they carry about its spatial structure? In this talk, I will present my attempts to answer the above questions using statistical, generative models of naturalistic (simulated) and fully natural binaural sounds.<br />
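One of the cues discussed in this abstract can be made concrete with a standard estimator (an illustration of the ITD cue itself, not of the speaker's generative models): the interaural time difference of a binaural signal can be estimated as the lag that maximizes the cross-correlation of the two ear signals.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (in seconds) as the lag
    maximizing the cross-correlation of the two ear signals.
    A positive value means the left signal lags the right one,
    i.e. the sound reached the right ear first."""
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    return lag / fs
```

Real localization systems apply this per frequency band (low frequencies for ITD, per the Duplex Theory), which is exactly where the question of combining cues across a natural sound's many components arises.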
<br />
'''15 May 2013'''<br />
* Speaker: Byron Yu<br />
* Affiliation: CMU<br />
* Host: Bruno/Jose (jointly sponsored with CNEP)<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''22 May 2013'''<br />
* Speaker: Bijan Pesaran<br />
* Affiliation: NYU<br />
* Host: Bruno/Jose (jointly sponsored with CNEP)<br />
* Status: confirmed <br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
=== 2011/12 academic year ===<br />
<br />
'''15 Sep 2011 (Thursday, at noon)'''<br />
* Speaker: Kathrin Berkner<br />
* Affiliation: Ricoh Innovations Inc.<br />
* Host: Ivana Tosic<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''21 Sep 2011'''<br />
* Speaker: Mike Kilgard<br />
* Affiliation: UT Dallas<br />
* Host: Michael Silver<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''27 Sep 2011'''<br />
* Speaker: Moshe Gur<br />
* Affiliation: Dept. of Biomedical Engineering, Technion, Israel Institute of Technology<br />
* Host: Bruno/Stan<br />
* Status: Confirmed<br />
* Title: On the unity of perception: How does the brain integrate activity evoked at different cortical loci?<br />
* Abstract: Any physical device we know, including computers, when comparing A to B must send the information to point C. I have done experiments in three modalities, somatosensory, auditory, and visual, where two different loci in the primary cortex are stimulated, and I argue that the "machine" converging hypothesis cannot explain the perceptual results. Thus we must assume a non-converging mechanism whereby the brain, at times, can compare (integrate, process) events that take place at different loci without sending the information to a common target. Once we allow for such a mechanism, many phenomena can be viewed differently. Take for example the question of how and where multi-sensory integration takes place: we perceive a synchronized talking face, yet detailed visual and auditory information are represented at very different brain loci.<br />
<br />
'''5 Oct 2011'''<br />
* Speaker: Susanne Still<br />
* Affiliation: University of Hawaii at Manoa<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: Predictive power, memory and dissipation in learning systems operating far from thermodynamic equilibrium<br />
* Abstract: Understanding the physical processes that underlie the functioning of biological computing machinery often requires describing processes that occur far from thermodynamic equilibrium. In recent years significant progress has been made in this area, most notably Jarzynski’s work relation and Crooks’ fluctuation theorem. In this talk I will explore how dissipation of energy is related to a system's information processing inefficiency. The focus is on driven systems that are embedded in a stochastic operating environment. If we describe the system as a state machine, then we can interpret the stochastic dynamics as performing a computation that results in an (implicit) model of the stochastic driving signal. I will show that instantaneous non-predictive information, which serves as a measure of model inefficiency, provides a lower bound on the average dissipated work. This implies that learning systems with larger predictive power can operate more energetically efficiently. We may speculate that biological systems have evolved to reflect this kind of adaptation. One interesting insight here is that purely physical considerations yield a requirement that is perfectly in line with the general belief that a useful model must be predictive (at fixed model complexity). Our result thereby ties together ideas from learning theory with basic non-equilibrium thermodynamics.<br />
<br />
'''19 Oct 2011'''<br />
* Speaker: Graham Cummins<br />
* Affiliation: WSU<br />
* Host: Jeff Teeters<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''26 Oct 2011'''<br />
* Speaker: Shinji Nishimoto<br />
* Affiliation: Gallant lab, UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''14 Dec 2011'''<br />
* Speaker: Austin Roorda<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: How the unstable eye sees a stable and moving world<br />
* Abstract:<br />
<br />
'''11 Jan 2012'''<br />
* Speaker: Ken Nakayama<br />
* Affiliation: Harvard University<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Subjective Contours<br />
* Abstract: The concept of the receptive field in visual science has been transformative. It fueled great discoveries of the second half of the 20th century, providing the dominant understanding of how the visual system works at its early stages. Its reign has been extended to the field of object recognition where, in the form of a linear classifier, it provides a framework to understand visual object recognition (DiCarlo and Cox, 2007).<br />
Untamed, however, are areas of visual perception, now more or less ignored, dubbed variously as the 2.5D sketch, mid-level vision, or surface representations. Here, neurons with their receptive fields seem unable to bridge the gap, to supply us with even a plausible speculative framework to understand amodal completion, subjective contours and other surface phenomena. Correspondingly, these areas have become a backwater: ignored, leapt over.<br />
Subjective contours, however, remain as vivid as ever, even more so.<br />
Every day, our visual system makes countless visual inferences as to the layout of the world's surfaces and objects. What’s remarkable is that subjective contours visibly reveal these inferences.<br />
<br />
'''Tuesday, 24 Jan 2012'''<br />
* Speaker: Aniruddha Das<br />
* Affiliation: Columbia University<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''22 Feb 2012'''<br />
* Speaker: Elad Schneidman <br />
* Affiliation: Department of Neurobiology, Weizmann Institute of Science<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Sparse high order interaction networks underlie learnable neural population codes<br />
* Abstract:<br />
<br />
'''29 Feb 2012 (at noon as usual)'''<br />
* Speaker: Heather Read<br />
* Affiliation: U. Connecticut<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: "Transformation of sparse temporal coding in auditory colliculus and cortex"<br />
* Abstract: TBD<br />
<br />
'''1 Mar 2012 (note: Thurs)'''<br />
* Speaker: Daniel Zoran<br />
* Affiliation: Hebrew University, Jerusalem<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''7 Mar 2012'''<br />
* Speaker: David Sivak<br />
* Affiliation: UCB<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''8 Mar 2012'''<br />
* Speaker: Ivan Schwab<br />
* Affiliation: UC Davis<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Evolution's Witness: How Eyes Evolved<br />
* Abstract:<br />
<br />
'''14 Mar 2012'''<br />
* Speaker: David Sussillo<br />
* Affiliation:<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''18 April 2012'''<br />
* Speaker: Kristofer Bouchard<br />
* Affiliation: UCSF<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Cortical Foundations of Human Speech Production<br />
* Abstract:<br />
<br />
'''23 May 2012''' (rescheduled from April 11)<br />
* Speaker: Logan Grosenick<br />
* Affiliation: Stanford, Deisseroth & Suppes Labs<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: Acquisition, creation, & analysis of 4D light fields with applications to calcium imaging & optogenetics<br />
* Abstract: In Light Field Microscopy (LFM), images can be computationally refocused after they are captured [1]. This permits acquiring focal stacks and reconstructing volumes from a single camera frame. In Light Field Illumination (LFI), the same ideas can be used to create an illumination system that can deliver focused light to any position in a volume without moving optics, and these two devices (LFM/LFI) can be used together in the same system [2]. So far, these imaging and illumination systems have largely been used independently in proof-of-concept experiments [1,2]. In this talk I will discuss applications of a combined scanless volumetric imaging and volumetric illumination system applied to 4D calcium imaging and photostimulation of neurons in vivo and in vitro. The volumes resulting from these methods are large (>500,000 voxels per time point), collected at 10-100 frames per second, and highly correlated in space and time. Analyzing such data has required the development and application of machine learning methods appropriate to large, sparse, nonnegative data, as well as the estimation of neural graphical models from calcium transients. This talk will cover the reconstruction and creation of volumes in a microscope using Light Fields [1,2], and the current state-of-the-art for analyzing these large volumes in the context of calcium imaging and optogenetics. <br />
<br />
[1] M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz. Light Field Microscopy. ACM Transactions on Graphics 25(3), Proceedings of SIGGRAPH 2006.<br />
[2] M. Levoy, Z. Zhang, and I. McDowall. Recording and controlling the 4D light field in a microscope. Journal of Microscopy, Volume 235, Part 2, 2009, pp. 144-162. Cover article.<br />
<br />
BIO: Logan received bachelor's degrees with honors in Biology and Psychology, and a master's degree in Statistics, from Stanford. He is a Ph.D. candidate in the Neurosciences Program working in the labs of Karl Deisseroth and Patrick Suppes, and a trainee at the Stanford Center for Mind, Brain, and Computation. He is interested in developing and applying novel computational imaging and machine learning techniques in order to observe, control, and understand neuronal circuit dynamics.<br />
<br />
'''7 June 2012''' (Thursday)<br />
* Speaker: Mitya Chklovskii<br />
* Affiliation: Janelia<br />
* Host: Bruno<br />
* Status:<br />
* Title:<br />
* Abstract:<br />
<br />
'''27 June 2012''' <br />
* Speaker: Jerry Feldman<br />
* Affiliation:<br />
* Host: Bruno<br />
* Status:<br />
* Title:<br />
* Abstract:<br />
<br />
'''30 July 2012''' <br />
* Speaker: Lucas Theis<br />
* Affiliation: Matthias Bethge lab, Werner Reichardt Centre for Integrative Neuroscience, Tübingen<br />
* Host: Jascha<br />
* Status: Confirmed<br />
* Title: Hierarchical models of natural images<br />
* Abstract: Probabilistic models of natural images have been used to solve a variety of computer vision tasks, as well as to better understand the computations performed by the visual system in the brain. Many theoretical considerations and biological observations suggest that natural image models should be hierarchically organized, yet to date the best known models are still based on what is better described as shallow representations. In this talk, I will present two image models. One is based on the idea of Gaussianization for greedily constructing hierarchical generative models. I will show that when combined with independent subspace analysis, it is able to compete with the state of the art for modeling image patches. The other model combines mixtures of Gaussian scale mixtures with a directed graphical model and multiscale image representations, and is able to generate highly structured images of arbitrary size. Evaluating the model's likelihood and comparing it to a large number of other image models shows that it might well be the best model for natural images yet.<br />
<br />
(joint work with Reshad Hosseini and Matthias Bethge)<br />
<br />
=== 2010/11 academic year ===<br />
<br />
'''02 Sep 2010'''<br />
* Speaker: Johannes Burge<br />
* Affiliation: University of Texas at Austin<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''8 Sep 2010'''<br />
* Speaker: Tobi Szuts<br />
* Affiliation: Meister Lab/ Harvard U.<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Wireless recording of neural activity in the visual cortex of a freely moving rat.<br />
* Abstract: Conventional neural recording systems restrict behavioral experiments to a flat indoor environment compatible with the cable that tethers the subject to the recording instruments. To overcome these constraints, we developed a wireless multi-channel system for recording neural signals from a freely moving animal the size of a rat or larger. The device takes up to 64 voltage signals from implanted electrodes, samples each at 20 kHz, time-division multiplexes them onto a single output line, and transmits that output by radio frequency to a receiver and recording computer more than 60 m away. The system introduces less than 4 µV RMS of electrode-referred noise, comparable to wired recording systems and considerably less than biological noise. The system has greater channel count or transmission distance than existing telemetry systems. The wireless system has been used to record from the visual cortex of a rat during unconstrained conditions. Outdoor recordings show V1 activity is modulated by nest-building activity. During unguided behavior indoors, neurons responded rapidly and consistently to changes in light level, suppressive effects were prominent in response to an illuminant transition, and firing rate was strongly modulated by locomotion. Neural firing in the visual cortex is relatively sparse and moderate correlations are observed over large distances, suggesting that synchrony is driven by global processes.<br />
<br />
'''29 Sep 2010'''<br />
* Speaker: Vikash Gilja<br />
* Affiliation: Stanford University<br />
* Host: Charles<br />
* Status: Confirmed<br />
* Title: Towards Clinically Viable Neural Prosthetic Systems.<br />
* Abstract:<br />
<br />
'''20 Oct 2010'''<br />
* Speaker: Alexandre Francois<br />
* Affiliation: USC<br />
* Host: <br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''3 Nov 2010'''<br />
* Speaker: Eric Jonas and Vikash Mansinghka<br />
* Affiliation: Navia Systems<br />
* Host: Jascha<br />
* Status: Confirmed<br />
* Title: Natively Probabilistic Computation: Principles, Artifacts, Architectures and Applications<br />
* Abstract: Complex probabilistic models and Bayesian inference are becoming<br />
increasingly critical across science and industry, especially in<br />
large-scale data analysis. They are also central to our best<br />
computational accounts of human cognition, perception and action.<br />
However, all these efforts struggle with the infamous curse of<br />
dimensionality. Rich probabilistic models can seem hard to write and<br />
even harder to solve, as specifying and calculating probabilities<br />
often appears to require the manipulation of exponentially (and<br />
sometimes infinitely) large tables of numbers.<br />
<br />
We argue that these difficulties reflect a basic mismatch between the<br />
needs of probabilistic reasoning and the deterministic, functional<br />
orientation of our current hardware, programming languages and CS<br />
theory. To mitigate these issues, we have been developing a stack of<br />
abstractions for natively probabilistic computation, based around<br />
stochastic simulators (or samplers) for distributions, rather than<br />
evaluators for deterministic functions. Ultimately, our aim is to<br />
produce a model of computation and the associated hardware and<br />
programming tools that are as suited for uncertain inference and<br />
decision-making as our current computers are for precise arithmetic.<br />
<br />
In this talk, we will give an overview of the entire stack of<br />
abstractions supporting natively probabilistic computation, with<br />
technical detail on several hardware and software artifacts we have<br />
implemented so far. We will also touch on some new theoretical results<br />
regarding the computational complexity of probabilistic programs.<br />
Throughout, we will motivate and connect this work to some current<br />
applications in biomedical data analysis and computer vision, as well<br />
as potential hypotheses regarding the implementation of probabilistic<br />
computation in the brain.<br />
<br />
This talk includes joint work with Keith Bonawitz, Beau Cronin,<br />
Cameron Freer, Daniel Roy and Joshua Tenenbaum.<br />
<br />
BRIEF BIOGRAPHY<br />
<br />
Vikash Mansinghka is a co-founder and the CTO of Navia Systems, a<br />
venture-funded startup company building natively probabilistic<br />
computing machines. He spent 10 years at MIT, eventually earning an<br />
SB in Mathematics, an SB in Computer Science, an MEng in Computer<br />
Science, and a PhD in Computation. He held graduate fellowships from<br />
the NSF and MIT Lincoln Laboratory, and his PhD dissertation won<br />
the 2009 MIT George M. Sprowls award for best dissertation in computer<br />
science. He currently serves on DARPA's Information Science and<br />
Technology (ISAT) Study Group.<br />
<br />
Eric Jonas is a co-founder of Navia Systems, responsible for in-house<br />
accelerated inference research and development. He spent ten years at<br />
MIT, where he earned SB degrees in electrical engineering and computer<br />
science and neurobiology, an MEng in EECS, with a neurobiology PhD<br />
expected really soon. He’s passionate about biological applications<br />
of probabilistic reasoning and hopes to use Navia’s capabilities to<br />
combine data from biological science, clinical histories, and patient<br />
outcomes into seamless models.<br />
<br />
'''8 Nov 2010'''<br />
* Speaker: Patrick Ruther<br />
* Affiliation: Imtek, University of Freiburg<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''10 Nov 2010'''<br />
* Speaker: Aurel Lazar<br />
* Affiliation: Department of Electrical Engineering, Columbia University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Encoding Visual Stimuli with a Population of Hodgkin-Huxley Neurons<br />
* Abstract: We first present a general framework for the reconstruction of natural video<br />
scenes encoded with a population of spiking neural circuits with random thresholds.<br />
The visual encoding system consists of a bank of filters, modeling the visual<br />
receptive fields, in cascade with a population of neural circuits, modeling encoding<br />
with spikes in the early visual system.<br />
The neuron models considered include integrate-and-fire neurons and ON-OFF<br />
neuron pairs with threshold-and-fire spiking mechanisms. All thresholds are assumed<br />
to be random. We show that for both time-varying and space-time-varying stimuli neural<br />
spike encoding is akin to taking noisy measurements on the stimulus.<br />
Second, we formulate the reconstruction problem as the minimization of a<br />
suitable cost functional in a finite-dimensional vector space and provide an explicit<br />
algorithm for stimulus recovery. We also present a general solution using the theory of<br />
smoothing splines in Reproducing Kernel Hilbert Spaces. We provide examples for both<br />
synthetic video and natural scenes and show that the quality of the<br />
reconstruction degrades gracefully as the threshold variability of the neurons increases.<br />
Third, we demonstrate a number of simple operations on the original visual stimulus<br />
including translations, rotations and zooming. All these operations are natively executed<br />
in the spike domain. The processed spike trains are decoded for the faithful recovery<br />
of the stimulus and its transformations.<br />
Finally, we extend the above results to neural encoding circuits built with Hodgkin-Huxley<br />
neurons.<br />
References:<br />
Aurel A. Lazar, Eftychios A. Pnevmatikakis and Yiyin Zhou,<br />
Encoding Natural Scenes with Neural Circuits with Random Thresholds, Vision Research, 2010,<br />
Special Issue on Mathematical Models of Visual Coding,<br />
http://dx.doi.org/10.1016/j.visres.2010.03.015<br />
Aurel A. Lazar,<br />
Population Encoding with Hodgkin-Huxley Neurons,<br />
IEEE Transactions on Information Theory, Volume 56, Number 2, pp. 821-837, February, 2010,<br />
Special Issue on Molecular Biology and Neuroscience,<br />
http://dx.doi.org/10.1109/TIT.2009.2037040<br />
<br />
'''11 Nov 2010''' (UCB holiday)<br />
* Speaker: Martha Nari Havenith<br />
* Affiliation: UCL<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: Finding spike timing in the visual cortex - Oscillations as the internal clock of vision?<br />
* Abstract:<br />
<br />
'''19 Nov 2010''' (note: on Friday because of SFN)<br />
* Speaker: Dan Butts<br />
* Affiliation: UMD<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: Common roles of inhibition in visual and auditory processing.<br />
* Abstract: The role of inhibition in sensory processing is often obscured in extracellular recordings, because the absence of a neuronal response associated with inhibition might also be explained by a simple lack of excitation. However, increasing evidence from intracellular recordings demonstrates important roles of inhibition in shaping the stimulus selectivity of sensory neurons in both the visual and auditory systems. We have developed a nonlinear modeling approach that can identify putative excitatory and inhibitory inputs to a neuron using standard extracellular recordings, and have applied these techniques to understand the role of inhibition in shaping sensory processing in visual and auditory areas. In pre-cortical visual areas (retina and LGN), we find that inhibition likely plays a role in generating temporally precise responses and mediates adaptation to changing contrast. In an auditory pre-cortical area (inferior colliculus), the identified inhibition has nearly identical appearance and function in temporal processing and adaptation. Thus, we predict common roles of inhibition in these sensory areas, and more generally demonstrate methods for characterizing the nonlinear computations that comprise sensory processing.<br />
<br />
'''24 Nov 2010'''<br />
* Speaker: Eizaburo Doi<br />
* Affiliation: NYU<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
<br />
'''29 Nov 2010 - informal talk'''<br />
* Speaker: Eero Lehtonen<br />
* Affiliation: UTU Finland<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Memristors<br />
* Abstract:<br />
<br />
'''1 Dec 2010'''<br />
* Speaker: Gadi Geiger<br />
* Affiliation: MIT<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: Visual and Auditory Perceptual Modes that Characterize Dyslexics<br />
* Abstract: I will describe how dyslexics’ visual and auditory perception is wider and more diffuse than that of typical readers, which suggests broader neural tuning in dyslexics. In addition, I will describe how this mode of processing relates to difficulties in reading. To strengthen the argument, and more importantly to help dyslexics, I will describe a regimen of practice that improves reading in dyslexics while narrowing perception.<br />
<br />
<br />
'''13 Dec 2010'''<br />
* Speaker: Jörg Lücke<br />
* Affiliation: FIAS<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Linear and Non-linear Approaches to Component Extraction and Their Applications to Visual Data<br />
* Abstract: In the nervous system of humans and animals, sensory data are represented as combinations of elementary data components. While for data such as sound waveforms the elementary components combine linearly, other data can better be modeled by non-linear forms of component superpositions. I motivate and discuss two models with binary latent variables: one using standard linear superpositions of basis functions and one using non-linear superpositions. Crucial for the applicability of both models are efficient learning procedures. I briefly introduce a novel training scheme (ET) and show how it can be applied to probabilistic generative models. For linear and non-linear models the scheme efficiently infers the basis functions as well as the level of sparseness and data noise. In large-scale applications to image patches, we show results on the statistics of inferred model parameters. Differences between the linear and non-linear models are discussed, and both models are compared to results of standard approaches in the literature and to experimental findings. Finally, I briefly discuss learning in a recent model that takes explicit component occlusions into account.<br />
<br />
'''15 Dec 2010'''<br />
* Speaker: Claudia Clopath<br />
* Affiliation: Université Paris Descartes<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
<br />
'''18 Jan 2011'''<br />
* Speaker: Siwei Lyu<br />
* Affiliation: Computer Science Department, University at Albany, SUNY<br />
* Host: Bruno<br />
* Status: confirmed <br />
* Title: Divisive Normalization as an Efficient Coding Transform: Justification and Evaluation<br />
* Abstract:<br />
<br />
'''19 Jan 2011'''<br />
* Speaker: David Field (informal talk)<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: Tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''25 Jan 2011'''<br />
* Speaker: Ruth Rosenholtz<br />
* Affiliation: Dept. of Brain & Cognitive Sciences, Computer Science and AI Lab, MIT<br />
* Host: Bruno<br />
* Status: Confirmed <br />
* Title: What your visual system sees where you are not looking<br />
* Abstract:<br />
<br />
'''26 Jan 2011'''<br />
* Speaker: Ernst Niebur<br />
* Affiliation: Johns Hopkins U<br />
* Host: Fritz<br />
* Status: Confirmed <br />
* Title: <br />
* Abstract:<br />
<br />
'''16 March 2011'''<br />
* Speaker: Vladimir Itskov<br />
* Affiliation: University of Nebraska-Lincoln<br />
* Host: Chris<br />
* Status: Confirmed <br />
* Title: <br />
* Abstract:<br />
<br />
'''23 March 2011'''<br />
* Speaker: Bruce Cumming<br />
* Affiliation: National Institutes of Health<br />
* Host: Ivana<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract:<br />
<br />
'''27 April 2011'''<br />
* Speaker: Lubomir Bourdev<br />
* Affiliation: Computer Science, UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: "Poselets and Their Applications in High-Level Computer Vision Problems"<br />
* Abstract:<br />
<br />
'''12 May 2011 (note: Thursday)'''<br />
* Speaker: Jack Culpepper<br />
* Affiliation: Redwood Center/EECS<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''26 May 2011'''<br />
* Speaker: Ian Stevenson<br />
* Affiliation: Northwestern University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Explaining tuning curves by estimating interactions between neurons<br />
* Abstract: One of the central tenets of systems neuroscience is that tuning curves are a byproduct of the interactions between neurons. Using multi-electrode recordings and recently developed inference techniques we can begin to examine this idea in detail and study how well we can explain the functional properties of neurons using the activity of other simultaneously recorded neurons. Here we examine datasets from 6 different brain areas recorded during typical sensorimotor tasks each with ~100 simultaneously recorded neurons. Using these datasets we measured the extent to which interactions between neurons can explain the tuning properties of individual neurons. We found that, in almost all areas, modeling interactions between 30-50 neurons allows more accurate spike prediction than tuning curves. This suggests that tuning can, in some sense, be explained by interactions between neurons in a variety of brain areas, even when recordings consist of relatively small numbers of neurons.<br />
<br />
'''1 June 2011'''<br />
* Speaker: Michael Oliver<br />
* Affiliation: Gallant lab<br />
* Host: Bruno<br />
* Status: Tentative <br />
* Title: <br />
* Abstract:<br />
<br />
'''8 June 2011'''<br />
* Speaker: Alyson Fletcher<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: Generalized Approximate Message Passing for Neural Receptive Field Estimation and Connectivity<br />
* Abstract: Fundamental to understanding sensory encoding and connectivity of neurons are effective tools for developing and validating complex mathematical models from experimental data. In this talk, I present a graphical models approach to the problems of neural connectivity reconstruction under multi-neuron excitation and to receptive field estimation of sensory neurons in response to stimuli. I describe a new class of Generalized Approximate Message Passing (GAMP) algorithms for a general class of inference problems on graphical models, based on Gaussian approximations of loopy belief propagation. The GAMP framework is extremely general, provides a systematic procedure for incorporating a rich class of nonlinearities, and is computationally tractable with large amounts of data. In addition, for both the connectivity reconstruction and parameter estimation problems, I show that GAMP-based estimation can naturally incorporate sparsity constraints in the model that arise from the fact that only a small fraction of the potential inputs have any influence on the output of a particular neuron. A simulation of reconstruction of cortical neural mapping under multi-neuron excitation shows that GAMP offers improvement over previous compressed sensing methods. The GAMP method is also validated on estimation of linear-nonlinear-Poisson (LNP) cascade models for neural responses of salamander retinal ganglion cells.<br />
<br />
=== 2009/10 academic year ===<br />
<br />
'''2 September 2009''' <br />
* Speaker: Keith Godfrey<br />
* Affiliation: University of Cambridge<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''7 October 2009'''<br />
* Speaker: Anita Schmid<br />
* Affiliation: Cornell University<br />
* Host: Kilian<br />
* Status: Confirmed<br />
* Title: Subpopulations of neurons in visual area V2 perform differentiation and integration operations in space and time<br />
* Abstract: The interconnected areas of the visual system work together to find object boundaries in visual scenes. Primary visual cortex (V1) mainly extracts oriented luminance boundaries, while secondary visual cortex (V2) also detects boundaries defined by differences in texture. How the outputs of V1 neurons are combined to allow for the extraction of these more complex boundaries in V2 is as yet unclear. To address this question, we probed the processing of orientation signals in single neurons in V1 and V2, focusing on response dynamics of neurons to patches of oriented gratings and to combinations of gratings in neighboring patches and sequential time frames. We found two kinds of response dynamics in V2, both of which are different from those of V1 neurons. While V1 neurons in general prefer one orientation, one subpopulation of V2 neurons (“transient”) shows a temporally dynamic preference, resulting in a preference for changes in orientation. The second subpopulation of V2 neurons (“sustained”) responds similarly to V1 neurons, but with a delay. The dynamics of nonlinear responses to combinations of gratings reinforce these distinctions: the dynamics enhance the preference of V1 neurons for continuous orientations, and enhance the preference of V2 transient neurons for discontinuous ones. We propose that transient neurons in V2 perform a differentiation operation on the V1 input, both spatially and temporally, while the sustained neurons perform an integration operation. We show that a simple feedforward network with delayed inhibition can account for the temporal but not for the spatial differentiation operation.<br />
<br />
'''28 October 2009'''<br />
* Speaker: Andrea Benucci<br />
* Affiliation: Institute of Ophthalmology, University College London<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Stimulus dependence of the functional connectivity between neurons in primary visual cortex<br />
* Abstract: It is known that visual stimuli are encoded by the concerted activity of large populations of neurons in visual cortical areas. However, it is only recently that recording techniques have been made available to study such activations from large ensembles of neurons simultaneously, with millisecond temporal precision and tens of microns spatial resolution. I will present data from voltage-sensitive dye (VSD) imaging and multi-electrode recordings (“Utah” probes) from the primary visual cortex of the cat (V1). I will discuss the relationship between two fundamental cortical maps of the visual system: the map of retinotopy and the map of orientation. Using spatially localized and full-field oriented stimuli, we studied the functional interdependency of these maps. I will describe traveling and standing waves of cortical activity and their key role as a dynamical substrate for the spatio-temporal coding of visual information. I will further discuss the properties of the spatio-temporal code in the context of continuous visual stimulation. While recording population responses to a sequence of oriented stimuli, we asked how responses to individual stimuli summate over time. We found that such rules are mostly linear, supporting the idea that spatial and temporal codes in area V1 operate largely independently. However, these linear rules of summation fail when the visual drive is removed, suggesting that the visual cortex can readily switch between a dynamical regime where either feed-forward or intra-cortical inputs determine the response properties of the network.<br />
<br />
'''12 November 2009 (Thursday)'''<br />
* Speaker: Song-Chun Zhu<br />
* Affiliation: UCLA<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''18 November 2009'''<br />
* Speaker: Dan Graham<br />
* Affiliation: Dept. of Mathematics, Dartmouth College<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: The Packet-Switching Brain: A Hypothesis<br />
* Abstract: Despite great advances in our understanding of neural responses to natural stimuli, the basic structure of the neural code remains elusive. In this talk, I will describe a novel hypothesis regarding the fundamental structure of neural coding in mammals. In particular, I propose that an internet-like routing architecture (specifically packet-switching) underlies neocortical processing, and I propose means of testing this hypothesis via neural response sparseness measurements. I will synthesize a host of suggestive evidence that supports this notion and will, more generally, argue in favor of a large-scale shift from the now-dominant “computer metaphor” to the “internet metaphor.” This shift is intended to spur new thinking with regard to neural coding, and its main contribution is to privilege communication over computation as the prime goal of neural systems.<br />
<br />
'''16 December 2009'''<br />
* Speaker: Pietro Berkes<br />
* Affiliation: Volen Center for Complex Systems, Brandeis University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Generative models of vision: from sparse coding toward structured models<br />
* Abstract: From a computational perspective, one can think of visual perception as the problem of analyzing the light patterns detected by the retina to recover their external causes. This process requires combining the incoming sensory evidence with internal prior knowledge about general properties of visual elements and the way they interact, and can be formalized in a class of models known as causal generative models. In the first part of the talk, I will discuss the first and most established generative model, namely the sparse coding model. Sparse coding has been largely successful in showing how the main characteristics of simple-cell receptive fields can be accounted for solely from the statistics of natural images. I will briefly review the evidence supporting this model, and contrast it with recent data from the primary visual cortex of ferrets and rats showing that the sparseness of neural activity over development and anesthesia seems to follow trends opposite to those predicted by sparse coding. In the second part, I will argue that the generative point of view calls for models of natural images that take into account more of the structure of the visual environment. I will present a model that takes a first step in this direction by incorporating the fundamental distinction between identity and attributes of visual elements. After learning, the model mirrors several aspects of the organization of V1, and results in a novel interpretation of complex and simple cells as parallel populations of cells, coding for different aspects of the visual input. Further steps toward more structured generative models might thus lead to the development of a more comprehensive account of visual processing in the visual cortex.<br />
<br />
'''6 January 2010'''<br />
* Speaker: Susanne Still<br />
* Affiliation: U of Hawaii<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''20 January 2010'''<br />
* Speaker: Tom Dean<br />
* Affiliation: Google<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Accelerating Computer Vision and Machine Learning Algorithms with Graphics Processors<br />
* Abstract: Graphics processors (GPUs) and massively-multi-core architectures are becoming more powerful, less costly and more energy efficient, and the related programming language issues are beginning to sort themselves out. That said, most researchers don’t want to be writing code that depends on any particular architecture or parallel programming model. Linear algebra, Fourier analysis and image processing have standard libraries that are being ported to exploit SIMD parallelism in GPUs. We can depend on the massively-multi-core machines du jour to support these libraries and on the high-performance-computing (HPC) community to do the porting for us or with us. These libraries can significantly accelerate important applications in image processing, data analysis and information retrieval. We can develop APIs and the necessary run-time support so that code relying on these libraries will run on any machine in a cluster of computers but exploit GPUs whenever available. This strategy allows us to move toward hybrid computing models that enable a wider range of opportunities for parallelism without requiring the special training of programmers or the disadvantages of developing code that depends on specialized hardware or programming models. This talk summarizes the state of the art in massively-multi-core architectures, presents experimental results that demonstrate the potential for significant performance gains in the two general areas of image processing and machine learning, provides examples of the proposed programming interface, and gives more detailed experimental results on one particular problem involving video-content analysis.<br />
<br />
'''27 January 2010'''<br />
* Speaker: David Philipona<br />
* Affiliation: Paris<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''24 February 2010'''<br />
* Speaker: Gordon Pipa<br />
* Affiliation: U Osnabrueck/MPI Frankfurt<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''3 March 2010'''<br />
* Speaker: Gaute Einevoll<br />
* Affiliation: UMB, Norway<br />
* Host: Amir<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
<br />
'''4 March 2010'''<br />
* Speaker: Harvey Swadlow<br />
* Affiliation: <br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''8 April 2010'''<br />
* Speaker: Alan Yuille <br />
* Affiliation: UCLA<br />
* Host: Amir<br />
* Status: Confirmed (for 1pm)<br />
* Title: <br />
* Abstract:<br />
<br />
'''28 April 2010'''<br />
* Speaker: Dharmendra Modha - cancelled<br />
* Affiliation: IBM<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''5 May 2010'''<br />
* Speaker: David Zipser<br />
* Affiliation: UCB<br />
* Host: Daniel Little<br />
* Status: Tentative<br />
* Title: Brytes 2:<br />
* Abstract:<br />
<br />
Brytes are little brains that can be assembled into larger, smarter brains. In my first talk I presented a biologically plausible, computationally tractable model of brytes and described how they can be used as subunits to build brains with interesting behaviors.<br />
<br />
In this talk I will first show how large numbers of brytes can cooperate to perform complicated actions such as arm and hand manipulations in the presence of obstacles. Then I describe a strategy for a higher level of control that informs each bryte what role it should play in accomplishing the current task. These results could have considerable significance for understanding the brain and possibly be applicable to robotics and BMI.<br />
<br />
'''12 May 2010'''<br />
* Speaker: Frank Werblin (Redwood group meeting - internal only)<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: Tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''19 May 2010'''<br />
* Speaker: Anna Judith<br />
* Affiliation: UCB<br />
* Host: Daniel Little (Redwood Lab Meeting - internal only)<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:</div>
<hr />
<div>== Instructions ==<br />
<br />
# During the visit you will need to look after the visitor, schedule visits with other labs, make plans for lunch, dinner, etc., and introduce them at the seminar (don't ask Bruno to do this at the last moment). Save receipts for any meals you paid for.<br />
# After the seminar and before the speaker leaves, make sure to give them Natalie's contact info and have them email her their receipts, explaining it's for reimbursement for a Redwood seminar. Natalie will then process the reimbursement. She can also help you with getting reimbursed for any expenses you incurred for meals and entertainment.<br />
<br />
== Tentative / Confirmed Speakers ==<br />
<br />
<br />
'''Oct. 11, 2017'''<br />
* Speaker: Deepak Pathak and Pulkit Agrawal<br />
* Time: 12:30 PM<br />
* Affiliation: UC Berkeley, BAIR<br />
* Host: Mayur Mudigonda<br />
* Status: Confirmed<br />
* Title: Curiosity and Rewards<br />
* Abstract:<br />
<br />
'''Nov. 8, 2017'''<br />
* Speaker: John Harte<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Maximum Entropy and the Inference of Patterns in Nature <br />
* Abstract:<br />
<br />
'''TBD, sometime in the Fall'''<br />
* Speaker: Evangelos Theodorou<br />
* Time: TBD<br />
* Affiliation: GeorgiaTech<br />
* Host: Mike/Dibyendu Mandal<br />
* Status: planning<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''September 25th 2017'''<br />
* Speaker: Caleb Kemere<br />
* Time: 12:00<br />
* Affiliation: Rice<br />
* Host: Guy Isely/Sean MacKesey<br />
* Status: Confirmed<br />
* Title: Unsupervised Inference of the Hippocampal Population Code from Offline Activity<br />
* Abstract: TBD-- HMM-based hippocampal replay<br />
<br />
<br />
'''TBD, 2016'''<br />
* Speaker: Alexander Stubbs<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Michael Levy<br />
* Status: tentative<br />
* Title: Could chromatic aberration allow for an alternative evolutionary pathway towards color vision?<br />
* Abstract: We present a mechanism by which organisms with only a single photoreceptor, which have a monochromatic view of the world, can achieve color discrimination. An off-axis pupil and the principle of chromatic aberration (where different wavelengths come to focus at different distances behind a lens) can combine to provide “color-blind” animals with a way to distinguish colors. As a specific example, we constructed a computer model of the visual system of cephalopods (octopus, squid, and cuttlefish) that have a single unfiltered photoreceptor type. We compute a quantitative image quality budget for this visual system and show how chromatic blurring dominates the visual acuity in these animals in shallow water. This proposed mechanism is consistent with the extensive suite of visual/behavioral and physiological data that has been obtained from cephalopod studies and offers a possible solution to the apparent paradox of vivid chromatic behaviors in color blind animals. Moreover, this proposed mechanism has potential applicability in organisms with limited photoreceptor complements, such as spiders and dolphins.<br />
<br />
== Previous Seminars ==<br />
<br />
=== 2017/18 academic year ===<br />
<br />
'''July 10, 2017'''<br />
* Speaker: David Field<br />
* Time: 6:00pm<br />
* Affiliation: Cornell<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''July 18, 2017'''<br />
* Speaker: Jordi Puigbò<br />
* Time: 12:30<br />
* Affiliation: Synthetic, Perceptive, Emotive and Cognitive Systems (SPECS) lab, Dept. of Information and Telecommunication Technologies, Universitat Pompeu Fabra (Barcelona - Spain)<br />
* Host: Vasha<br />
* Status: Confirmed<br />
* Title: State Dependent Modulation of Perception Based on a Computational Model of Conditioning<br />
* Abstract: The embodied mammalian brain evolved to adapt to an only partially known and knowable world. The adaptive labeling of the world is critically dependent on the neocortex, which in turn is modulated by a range of subcortical systems such as the thalamus, ventral striatum, and the amygdala. A particular case in point is the learning paradigm of classical conditioning, where acquired representations of states of the world such as sounds and visual features are associated with predefined discrete behavioral responses such as eye blinks and freezing. Learning progresses in a very specific order, where the animal first identifies the features of the task that are predictive of a motivational state and then forms the association of the current sensory state with a particular action and shapes this action to the specific contingency. This adaptive feature selection has both attentional and memory components, i.e. a behaviorally relevant state must be detected while its representation must be stabilized to allow its interfacing to output systems. Here we present a computational model of the neocortical systems that underlie this feature detection process and its state-dependent modulation mediated by the amygdala and its downstream target, the nucleus basalis of Meynert. Specifically, we analyze how amygdala-driven cholinergic modulation switches between two perceptual modes, one for exploitation of learned representations and prototypes and another for the exploration of new representations that provoked this change in the motivational state, presenting a framework for rapid learning of behaviorally relevant perceptual representations. Beyond reward-driven learning that is mostly based on exploitation, this paper presents a complementary mechanism for quick exploratory perception and learning grounded in the understanding of fear and surprise.<br />
<br />
'''Aug. 14, 2017'''<br />
* Speaker: Brent Doiron<br />
* Time: 12:00<br />
* Affiliation:<br />
* Host: Bruno/Hillel<br />
* Status: tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''Aug. 15, 2017'''<br />
* Speaker: Ken Miller<br />
* Time: 12:00<br />
* Affiliation: Columbia<br />
* Host: Bruno/Hillel<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Aug. 16, 2017'''<br />
* Speaker: Joshua Vogelstein<br />
* Time: 12:00<br />
* Affiliation: JHU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept. 6, 2017'''<br />
* Speaker: Gerald Friedland<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Jerry<br />
* Status: confirmed<br />
* Title: A Capacity Scaling Law for Artificial Neural Networks<br />
* Abstract:<br />
<br />
'''Sept. 20, 2017'''<br />
* Speaker: Carl Pabo<br />
* Time: 12:00<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Human Thought and the Human Future<br />
* Abstract:<br />
<br />
=== 2016/17 academic year ===<br />
<br />
'''Sept. 7, 2016'''<br />
* Speaker: Dan Stowell<br />
* Time: 12:00<br />
* Affiliation: Queen Mary, University of London<br />
* Host: Frederic Theunissen<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept. 8, 2016'''<br />
* Speaker: Barb Finlay<br />
* Time: 12:00<br />
* Affiliation: Cornell Univ<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept. 27, 2016'''<br />
* Speaker: Yoshua Bengio<br />
* Time: 11:00<br />
* Affiliation: Univ Montreal<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Oct. 12, 2016'''<br />
* Speaker: Paul Rhodes<br />
* Time: 4:00<br />
* Affiliation: Specific Technologies<br />
* Host: Dylan/Bruno<br />
* Status: confirmed<br />
* Title: A novel and important problem in spatiotemporal pattern classification<br />
* Abstract: Specific Technologies uses a sensor response that consists of a vector time series, a spatiotemporal fingerprint, to classify bacteria at the strain level during their growth. The identification of resistant strains of bacteria has become one of the world's great problems (here is a link to a $20M prize that the US govt has issued: https://www.nih.gov/news-events/news-releases/federal-prize-competition-seeks-innovative-ideas-combat-antimicrobial-resistance). We are using deep convolutional nets to do this classification, but they are instantaneous, and so do not capture the temporal patterns that are often at the core of what differentiates strains. So using the full temporal character of the sensor response time series is a cutting edge neural ML problem, and important to society too.<br />
<br />
'''Oct. 25, 2016'''<br />
* Speaker: Douglas L. Jones<br />
* Time: 2:00<br />
* Affiliation: ECE Department, University of Illinois at Urbana-Champaign<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Optimal energy-efficient coding in sensory neurons<br />
* Abstract: Evolutionary pressure suggests that the spike-based code in the sensory nervous system should satisfy two opposing constraints: 1) minimize signal distortion in the encoding process (i.e., maintain fidelity) by keeping the average spike rate as high as possible, and 2) minimize the metabolic load on the neuron by keeping the average spike rate as low as possible. We hypothesize that selective pressure has shaped the biophysics of a neuron to satisfy these conflicting demands. An energy-fidelity trade-off can be obtained through a constrained optimization process that achieves the lowest signal distortion for a given constraint on the spike rate. We derive the asymptotically optimal average-energy-constrained neuronal source code and show that it leads to a dynamic threshold that functions as an internal decoder (reconstruction filter) and adapts a spike-firing threshold so that spikes are emitted only when the coding error reaches this threshold. A stochastic extension is obtained by adding internal noise (dithering, or stochastic resonance) to the spiking threshold. We show that the source-coding neuron model i) reproduces experimentally observed spike-times in response to a stimulus, and ii) reproduces the serial correlations in the observed sequence of inter-spike intervals, using data from a peripheral sensory neuron and a central (cortical) somatosensory neuron. Finally, we show that the spike-timing code, although a temporal code, is in the limit of high firing rates an instantaneous rate code and accurately predicts the peri-stimulus time histogram (PSTH). We conclude by suggesting possible biophysical (ionic) mechanisms for this coding scheme.<br />
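The "spike only when the coding error reaches a threshold" scheme in this abstract can be illustrated with a minimal sketch. This is not the speaker's model; the function name, leak factor, and threshold are illustrative assumptions, showing only the core idea of an internal decoder that tracks the stimulus and emits a signed spike at each threshold crossing.<br />

```python
import numpy as np

def threshold_spike_encode(signal, theta=0.2, leak=0.99):
    """Toy sketch of a threshold-based spiking source coder (illustrative,
    not the speaker's model). An internal leaky reconstruction tracks the
    stimulus; a signed spike is emitted only when the coding error reaches
    the threshold theta, and the decoder then updates by one quantum."""
    recon, spikes, trace = 0.0, [], []
    for x in signal:
        recon *= leak                    # leaky internal reconstruction
        err = x - recon
        if abs(err) >= theta:            # fire only at threshold crossings
            s = float(np.sign(err))
            recon += s * theta           # decoder moves by one quantum
            spikes.append(s)
        else:
            spikes.append(0.0)
        trace.append(recon)
    return np.array(spikes), np.array(trace)
```

With no leak, the internal reconstruction stays within one threshold of the stimulus between spikes, which is the sense in which the dynamic threshold bounds the coding error.<br />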
<br />
'''October 26, 2016'''<br />
* Speaker: Eric Jonas<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Charles Frye<br />
* Status: confirmed<br />
* Title: Could a neuroscientist understand a microprocessor?<br />
* Abstract: There is a popular belief in neuroscience that we are primarily data limited, that producing large, multimodal, and complex datasets will, enabled by data analysis algorithms, lead to fundamental insights into the way the brain processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. Here we take a simulated classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the processor. This suggests that current computational approaches in neuroscience may fall short of producing meaningful models of the brain. We discuss several obvious shortcomings with this model, and ways that they might be addressed, both experimentally and computationally. <br />
* Bio: Eric Jonas is currently a postdoc in computer science at UC Berkeley working with Ben Recht on machine learning for scientific data acquisition. He earned his PhD in Computational Neuroscience, M. Eng in Electrical Engineering, BS in Electrical Engineering and Computer Science, and BS in Neurobiology, all from MIT. Prior to his return to academia, he was founder and CEO of Prior Knowledge, a predictive database company which was acquired in 2012 by Salesforce.com, where he was Chief Predictive Scientist until 2014. In 2015 he was named one of the top rising stars in bioengineering by the Defense Department’s Advanced Research Projects Agency (DARPA).<br />
<br />
'''Nov. 9, 2016'''<br />
* Speaker: Pulkit Agrawal<br />
* Time: 12:00<br />
* Affiliation: EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''Nov. 16, 2016'''<br />
* Speaker: Sebastian Musslick<br />
* Time: 12:00<br />
* Affiliation: Princeton Neuroscience Institute (Princeton University)<br />
* Host: Brian Cheung<br />
* Status: confirmed<br />
* Title: Parallel Processing Capability Versus Efficiency of Representation in Neural Network Architectures<br />
* Abstract: One of the most salient and well-recognized features of human goal-directed behavior is our limited ability to conduct multiple demanding tasks at once. Why is this? Some have suggested it reflects metabolic limitations, or structural ones. However, both explanations are unlikely. The brain routinely demonstrates the ability to carry out a multitude of processes in an enduring and parallel manner (walking, breathing, listening). Why, in contrast, is its capacity for allocating attention to control-demanding tasks - such a critical and powerful function - so limited? In the first part of my talk I will describe a computational framework that explains limitations of parallel processing in neural network architectures as the result of cross-talk between shared task representations. Using graph-theoretic analyses we show that the parallel processing (multitasking) capability of two-layer networks drops precipitously as a function of task pathway overlap, and scales highly sublinearly with network size. I will describe how this analysis can be applied to task representations encoded in neural networks or neuroimaging data, and show how it can be used to predict both concurrent and sequential multitasking performance in trained neural networks based on single task representations. Our results suggest that maximal parallel processing performance is achieved by segregating task pathways, by separating the representations on which they rely. However, there is a countervailing pressure for pathways to intersect: the re-use of representations to facilitate learning of new tasks. In the second part of my talk I will demonstrate a tradeoff between learning efficiency and parallel processing capability in neural networks. It can be shown that weight priors on learned task similarity improve learning speed and generalization but lead to strong constraints on parallel processing capability. 
These findings will be contrasted with an ongoing behavioral study by assessing learning and multitasking performance of human subjects across tasks with varying degrees of feature-overlap.<br />
<br />
'''Nov 30, 2016'''<br />
* Speaker: Marcus Rohrbach<br />
* Time: 12:00<br />
* Affiliation: EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''March 1st, 2017'''<br />
* Speaker: Sahar Akram<br />
* Time: 12:00<br />
* Affiliation: Starkey Hearing Research Center <br />
* Host: Shariq<br />
* Status: Confirmed<br />
* Title: Real-Time & Adaptive Auditory Neural Processing<br />
* Abstract: Decoding the dynamics of brain activity underlying conscious behavior is one of the key questions in systems neuroscience. Sensory neurons, such as those in the auditory system, can undergo rapid and task-dependent changes in their response characteristics during attentive behavior, and thereby result in functional changes in the system over time. In order to quantify humans' conscious experience, neuroimaging techniques such as electroencephalography (EEG) and magnetoencephalography (MEG) are widely used to record the neural activity from the brain with millisecond temporal resolution. Therefore, a dynamic decoding framework on par with the sampling resolution of EEG/MEG is crucial in order to better understand the neural correlates underlying sophisticated cognitive functions such as attention. I will talk about two recent attempts on real-time decoding of brain neural activity during a competing auditory attention task, using Bayesian hierarchical modeling and adaptive signal processing.<br />
<br />
'''Mar 2, 2017'''<br />
* Speaker: Jozsef Fiser<br />
* Time: 12:00<br />
* Affiliation:<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Mar 22, 2017'''<br />
* Speaker: Michael Frank<br />
* Time: 12:00<br />
* Affiliation: Magicore Systems<br />
* Host: Dylan<br />
* Status: Confirmed<br />
* Title: The Future of the Multi-core Platform: Task-Superscalar Extensions to the Von Neumann Architecture and Optimization for Neural Networks<br />
* Abstract: Technology scaling carried computer science through the second half of the 20th century until single-CPU performance started leveling off, after which multi- and many-core processors, including GPUs, emerged as the substrate for high-performance computing. Mobile market implementations followed this trend, and today you might be carrying a phone with more than 16 different processors. For power-efficiency reasons, many of the cores are specialized to perform limited functions (such as modem or connectivity control, graphics rendering, or future neural-network acceleration), with most mainstream phones containing four or more general-purpose processors. As Steve Jobs insightfully commented almost a decade ago, “The way the processor industry is going is to add more and more cores, but nobody knows how to program those things.” Jobs was correct: programming these multiprocessor systems has become a challenge, and several programming models have been proposed in academia to address this issue. Power and thermals are also an ever-present thorn for mass-market applications. Through the years, CPUs based on the von Neumann architecture have fended off attacks from many directions; today complex superscalar implementations execute multiple instructions each clock cycle, parallel and out-of-order, keeping up the illusion of sequential processing. Recent research demonstrates, though, that augmenting the von Neumann architecture with a few established concepts from data-flow and task-parallel programming will create both a credible and intuitive parallel architecture, enabling notable compute-efficiency improvement while retaining compatibility with the current mainstream. This talk will review the current state of the processor industry; after highlighting why we are running out of steam in ILP, I will outline the task-superscalar programming model as the “ring to rule them all” and provide insights into how this architecture can take advantage of special HW acceleration for data-flow management and provide support for efficient neuromorphic computing.<br />
<br />
'''April 12, 2017'''<br />
* Speaker: Aapo Hyvarinen<br />
* Time: 12:00<br />
* Affiliation: Gatsby/UCL<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''May 24, 2017'''<br />
* Speaker: Pierre Sermanet<br />
* Time: 12:00<br />
* Affiliation: Google Brain<br />
* Host: Brian<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''May 30, 2017'''<br />
* Speaker: Heiko Schutt<br />
* Time: 12:00<br />
* Affiliation: Univ Tubingen<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''June 7, 2017'''<br />
* Speaker: Saurabh Gupta<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Spencer<br />
* Status: confirmed<br />
* Title: Cognitive Mapping and Planning for Visual Navigation<br />
* Abstract: We introduce a novel neural architecture for navigation in novel environments that learns a cognitive map from first person viewpoints and plans a sequence of actions towards goals in the environment. The Cognitive Mapper and Planner (CMP) is based on two key ideas: a) a unified joint architecture for mapping and planning, such that the mapping is driven by the needs of the planner, and b) a spatial memory with the ability to plan given an incomplete set of observations about the world. CMP constructs a top-down belief map of the world and applies a differentiable neural net planner to produce the next action at each time step. The accumulated belief of the world enables the agent to track visited regions of the environment. Our experiments demonstrate that CMP outperforms both reactive strategies and standard memory-based architectures and performs well even in novel environments. Furthermore, we show that CMP can also achieve semantically specified goals, such as “go to a chair”. This is joint work with James Davidson, Sergey Levine, Rahul Sukthankar and Jitendra Malik.<br />
<br />
'''June 14, 2017'''<br />
* Speaker: Madhow<br />
* Time: 12:00<br />
* Affiliation: UCSB<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''June 19, 2017'''<br />
* Speaker: Tali Tishby<br />
* Time: 12:00<br />
* Affiliation: Hebrew Univ.<br />
* Host: Bruno/Daniel Reichman<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''June 21, 2017'''<br />
* Speaker: Jasmine Collins<br />
* Time: 12:00<br />
* Affiliation: Google<br />
* Host: Brian<br />
* Status: confirmed<br />
* Title: Capacity and Trainability in Recurrent Neural Networks <br />
* Abstract: Two potential bottlenecks on the expressiveness of recurrent neural networks (RNNs) are their ability to store information about the task in their parameters, and to store information about the input history in their units. We show experimentally that all common RNN architectures achieve nearly the same per-task and per-unit capacity bounds with careful training, for a variety of tasks and stacking depths. They can store an amount of task information which is linear in the number of parameters, and is approximately 5 bits per parameter. They can additionally store approximately one real number from their input history per hidden unit. We further find that for several tasks it is the per-task parameter capacity bound that determines performance. These results suggest that many previous results comparing RNN architectures are driven primarily by differences in training effectiveness, rather than differences in capacity. Supporting this observation, we compare training difficulty for several architectures, and show that vanilla RNNs are far more difficult to train, yet have slightly higher capacity. Finally, we propose two novel RNN architectures, one of which is easier to train than the LSTM or GRU for deeply stacked architectures.<br />
<br />
=== 2015/16 academic year ===<br />
<br />
'''July 21, 2015'''<br />
* Speaker: Felix Effenberger<br />
* Affiliation: <br />
* Host: Chris H.<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''July 22, 2015'''<br />
* Speaker: Lav Varshney<br />
* Affiliation: Urbana-Champaign<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''July 23, 2015'''<br />
* Speaker: Xuemin Wei<br />
* Affiliation: Univ Penn<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''July 29, 2015'''<br />
* Speaker: Gonzalo Otazu<br />
* Affiliation: Cold Spring Harbor Laboratory, Long Island, NY<br />
* Host: Mike D<br />
* Status: Confirmed<br />
* Title: The Role of Cortical Feedback in Olfactory Processing<br />
* Abstract: The olfactory bulb receives rich glutamatergic projections from the piriform cortex. However, the dynamics and importance of these feedback signals remain unknown. In the first part of this talk, I will present data from multiphoton calcium imaging of cortical feedback in the olfactory bulb of awake mice. Responses of feedback boutons were sparse, odor specific, and often outlasted stimuli by several seconds. Odor presentation either enhanced or suppressed the activity of boutons. However, any given bouton responded with stereotypic polarity across multiple odors, preferring either enhancement or suppression. Inactivation of piriform cortex increased odor responsiveness and pairwise similarity of mitral cells but had little impact on tufted cells. We propose that cortical feedback differentially impacts these two output channels of the bulb by specifically decorrelating mitral cell responses to enable odor separation. In the second part of the talk I will introduce a computational model of odor identification in natural scenes that uses cortical feedback, and show how the model's predictions match our experimental data.<br />
<br />
'''Aug 19, 2015'''<br />
* Speaker: Wujie Zhang<br />
* Affiliation: Columbia<br />
* Host: Bruno/Michael Yartsev<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept 2, 2015'''<br />
* Speaker: Jeremy Maitin-Shepard<br />
* Affiliation: Computer Science, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Combinatorial Energy Learning for Image Segmentation<br />
* Abstract: Recent advances in volume electron microscopy make it possible to image neuronal tissue volumes containing hundreds of thousands of neurons at sufficient resolution to discern even the finest neuronal processes. Accurate 3-D segmentation of the densely packed processes in these petavoxel-scale volumes is the key bottleneck in reconstructing large-scale neural circuits.<br />
<br />
'''Sept 8, 2015'''<br />
* Speaker: Jennifer Hasler<br />
* Affiliation: Georgia Tech<br />
* Host: Bruno/Mika<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''October 29, 2015'''<br />
* Speaker: Garrett Kenyon<br />
* Affiliation: Los Alamos National Laboratory<br />
* Host: Dylan<br />
* Status: confirmed<br />
* Title: A Deconvolutional Competitive Algorithm (DCA)<br />
* Abstract: The Locally Competitive Algorithm (LCA) is a neurally-plausible sparse solver based on lateral inhibition between leaky integrator neurons. LCA accounts for many linear and nonlinear response properties of V1 simple cells, including end-stopping and contrast-invariant orientation tuning. Here, we describe a convolutional implementation of LCA in which a column of feature vectors is replicated with a stride that is much smaller than the diameter of the corresponding kernels, allowing the construction of dictionaries that are many times more overcomplete than without replication. Using a local Hebbian rule that minimizes sparse reconstruction error, we are able to learn representations from unlabeled imagery, including monocular and stereo video streams, that in some cases support near state-of-the-art performance on object detection, action classification and depth estimation tasks, with a simple linear classifier. We further describe a scalable approach to building a hierarchy of convolutional LCA layers, which we call a Deconvolutional Competitive Algorithm (DCA). All layers in a DCA are trained simultaneously and all layers contribute to a single image reconstruction, with each layer deconvolving its representation through all lower layers back to the image plane. We show that a 3-layer DCA trained on short video clips obtained from hand-held cameras exhibits a clear segregation of image content, with features in the top layer reconstructing large-scale structures while features in the middle and bottom layers reconstruct progressively finer details. Lastly, we describe PetaVision, an open source, cloud-friendly, high-performance neural simulation toolbox that was used to perform the numerical studies presented here.<br />
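The LCA dynamics summarized above (leaky integrator units driven by dictionary correlations and inhibited by overlapping neighbors) can be sketched in a few lines. This is an illustrative toy implementation, not the speaker's PetaVision code; the function name, step size, and threshold are assumptions.<br />

```python
import numpy as np

def lca(dictionary, signal, lam=0.1, tau=0.05, n_steps=1000):
    """Minimal sketch of the Locally Competitive Algorithm (illustrative).

    Membrane potentials u of leaky integrator neurons are driven by the
    dictionary's correlation with the signal and laterally inhibited by
    other active units; the sparse code a is a soft threshold of u.
    """
    D = dictionary                       # (signal_dim, n_atoms), unit-norm columns
    b = D.T @ signal                     # feedforward drive
    G = D.T @ D - np.eye(D.shape[1])     # lateral inhibition (Gram minus self)
    u = np.zeros(D.shape[1])             # membrane potentials
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    for _ in range(n_steps):
        u += tau * (b - u - G @ soft(u)) # leaky integration + competition
    return soft(u)
```

At a fixed point this approximates a LASSO solution, which is why the learned codes are sparse: most units are fully suppressed by the threshold and by inhibition from better-matching atoms.<br />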
<br />
'''Nov 18, 2015'''<br />
* Speaker: Hillel Adesnik<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Nov 17, 2015'''<br />
* Speaker: Manuel Lopez<br />
* Affiliation: <br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Dec 2, 2015'''<br />
* Speaker: Steven Brumby<br />
* Affiliation: [http://www.descarteslabs.com/ Descartes Labs]<br />
* Host: Dylan<br />
* Status: confirmed<br />
* Title: Seeing the Earth in the Cloud<br />
* Abstract: The proliferation of transistors has increased the performance of computing systems by over a factor of a million in the past 30 years, and is also dramatically increasing the amount of data in existence, driving improvements in sensor, communication and storage technology. Multi-decadal Earth and planetary remote sensing global datasets at the petabyte scale (8×10^15 bits) are now available in commercial clouds, and new satellite constellations are planning to generate petabytes of images per year, providing daily global coverage at a few meters per pixel. Cloud storage with adjacent high-bandwidth compute, combined with recent advances in neuroscience-inspired machine learning for computer vision, is enabling understanding of the world at a scale and at a level of granularity never before feasible. We report here on a computation processing over a petabyte of compressed raw data from 2.8 quadrillion pixels (2.8 petapixels) acquired by the US Landsat and MODIS programs over the past 40 years. Using commodity cloud computing resources, we convert the imagery to a calibrated, georeferenced, multiresolution tiled format suited for machine-learning analysis. We believe ours is the first application to process, in less than a day, on generally available resources, over a petabyte of scientific image data. We report on work using this reprocessed dataset for experiments demonstrating country-scale food production monitoring, an indicator for famine early warning. <br />
<br />
'''Dec 14, 2015'''<br />
* Speaker: Bill Softky <br />
* Affiliation:<br />
* Host: Bruno<br />
* Status: confirmed <br />
* Title: Screen addiction - informal Redwood group seminar<br />
<br />
'''Dec 16, 2015'''<br />
* Speaker: Mike Landy<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Feb 3, 2016'''<br />
* Speaker: Ping-Chen Huang<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Feb 17, 2016'''<br />
* Speaker: Andrew Saxe<br />
* Affiliation: Harvard<br />
* Host: Jesse<br />
* Status: confirmed<br />
* Title: Hallmarks of Deep Learning in the Brain<br />
<br />
'''Feb 24, 2016'''<br />
* Speaker: Miguel Carreira-Perpinan<br />
* Affiliation: UC Merced<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
<br />
'''Mar 1, 2016'''<br />
* Speaker: Leon Gatys<br />
* Affiliation: Univ Tubingen<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Mar 7-9, 2016'''<br />
* NICE workshop<br />
<br />
'''Mar 9, 2016'''<br />
* Tatiana Engel - HWNI job talk at 12:00<br />
<br />
'''Mar 16, 2016'''<br />
* Talia Lerner - HWNI job talk at 12:00<br />
<br />
'''Mar 23, 2016'''<br />
* Speaker: Kwabena Boahen<br />
* Affiliation: Stanford<br />
* Host: Max Kanwal/Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''April 11, 2016'''<br />
* Speaker: Hao Su<br />
* Time: 12:00<br />
* Affiliation: Geometric Computing Lab and Artificial Intelligence Lab, Stanford University<br />
* Host: Yubei<br />
* Status: confirmed<br />
* Title: [Tentative] Joint Analysis for 2D Images and 3D shapes<br />
* Abstract: Coming<br />
<br />
'''May 04, 2016'''<br />
* Speaker: Zhengya Zhang<br />
* Time: 12:00<br />
* Affiliation: Electrical Engineering and Computer Science, University of Michigan<br />
* Host: Dylan, Bruno<br />
* Status: Confirmed<br />
* Title: Sparse Coding ASIC Chips for Feature Extraction and Classification<br />
* Abstract: Hardware-based computer vision accelerators will be an essential part of future mobile and autonomous devices to meet the low power and real-time processing requirement. To realize a high energy efficiency and high throughput, the accelerator architecture can be massively parallelized and tailored to the underlying algorithms, which is an advantage over software-based solutions and general-purpose hardware. In this talk, I will present three application-specific integrated circuit (ASIC) chips that implement the sparse and independent local network (SAILnet) algorithm and the locally competitive algorithm (LCA) for feature extraction and classification. Two of the chips were designed using an array of leaky integrate-and-fire neurons. Sparse activations of the neurons make possible an efficient grid-ring architecture to deliver an image processing throughput of 1 G pixel/s using only 200 mW. The third chip was designed using a convolution approach. Sparsity is again an important factor that enabled the use of sparse convolvers to achieve an effective performance of 900 G operations/s using less than 150 mW.<br />
<br />
'''May 18, 2016'''<br />
* Speaker: Melanie Mitchell<br />
* Affiliation: Portland State University and Santa Fe Institute<br />
* Host: Dylan<br />
* Time: 12:00<br />
* Status: confirmed<br />
* Title: Using Analogy to Recognize Visual Situations<br />
* Abstract: Enabling computers to recognize abstract visual situations remains a hard open problem in artificial intelligence. No machine vision system comes close to matching human ability at identifying the contents of images or visual scenes, or at recognizing abstract similarity between different scenes, even though such abilities pervade human cognition. In this talk I will describe my research on getting computers to flexibly recognize visual situations by integrating low-level vision algorithms with an agent-based model of higher-level concepts and analogy-making. <br />
* Bio: Melanie Mitchell is Professor of Computer Science at Portland State University, and External Professor and Member of the Science Board at the Santa Fe Institute. She received a Ph.D. in Computer Science from the University of Michigan. Her dissertation, in collaboration with her advisor Douglas Hofstadter, was the development of Copycat, a computer program that makes analogies. She is the author or editor of five books and over 70 scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her most recent book, Complexity: A Guided Tour (Oxford, 2009), won the 2010 Phi Beta Kappa Science Book Award. It was also named by Amazon.com as one of the ten best science books of 2009, and was longlisted for the Royal Society's 2010 book prize. Melanie directs the Santa Fe Institute's Complexity Explorer project, which offers online courses and other educational resources related to the field of complex systems.<br />
<br />
'''June 8, 2016'''<br />
* Speaker: Kris Bouchard<br />
* Time: 12:00<br />
* Affiliation: LBNL<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: The union of intersections method<br />
* Abstract:<br />
<br />
'''June 15, 2016'''<br />
* Speaker: James Blackmon<br />
* Time: 12:00<br />
* Affiliation: San Francisco State University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
=== 2014/15 academic year ===<br />
<br />
'''2 July 2014'''<br />
* Speaker: Kelly Clancy<br />
* Affiliation: Feldman lab<br />
* Host: Guy<br />
* Status: confirmed<br />
* Title: Volitional control of neural assemblies in L2/3 of motor and somatosensory cortices<br />
* Abstract: I'll be talking about a joint effort between the Feldman, Carmena and Costa labs to study abstract task learning by small neuronal assemblies in intact networks. Brain-machine interfaces are a unique tool for studying learning, thanks to the direct mapping between neural activity and reward. We trained mice to operantly control an auditory cursor using spike-related calcium signals recorded with two-photon imaging in motor and somatosensory cortex, allowing us to assess the effects of learning with great spatial detail. Mice rapidly learned to modulate activity in layer 2/3 neurons, evident both across and within sessions. Interestingly, even neurons that exhibited very low or no spontaneous spiking--so-called 'silent' cells that are invisible to electrode-based techniques--could be behaviorally up-modulated for task performance. Learning was accompanied by modifications of firing correlations in spatially localized networks at fine scales.<br />
<br />
'''23 July 2014'''<br />
* Speaker: Gautam Agarwal<br />
* Affiliation: UC Berkeley/Champalimaud<br />
* Host: Friedrich Sommer<br />
* Status: confirmed<br />
* Title: Unsolved Mysteries of Hippocampal Dynamics<br />
* Abstract: Two radically different forms of electrical activity can be observed in the rat hippocampus: spikes and local field potentials (LFPs). Hippocampal pyramidal neurons are mostly silent, yet spike vigorously as the subject encounters particular locations in its environment. In contrast, LFPs appear to lack place-selectivity, persisting regardless of the rat's location. Recently, we found that in fact one can recover from LFPs the spatial information present in the underlying neuronal population, showing how these two signals are two sides of the same coin. Nonetheless, there are many aspects of the LFP that remain mysterious. I will review several observations and explanatory gaps which await further study. These include: the relationship of LFP patterns to anatomy; the elusive structure of gamma waves; complex forms of cross-frequency coupling; variations in LFP patterns seen when the rat explores its world more freely; reconciling the memory and navigation roles of the hippocampus.<br />
<br />
'''6 Aug 2014'''<br />
* Speaker: Georg Martius<br />
* Affiliation: Max Planck Institute, Leipzig<br />
* Host: Fritz Sommer<br />
* Status: confirmed<br />
* Title: Information driven self-organization of robotic behavior<br />
* Abstract: Autonomy is a puzzling phenomenon in nature and a major challenge in the world of artifacts. A key feature of autonomy in both natural and artificial systems is seen in the ability for independent exploration. In animals and humans, the ability to modify one's own pattern of activity is not only an indispensable trait for adaptation and survival in new situations, it also provides a learning system with novel information for improving its cognitive capabilities, and it is essential for development. Efficient exploration in high-dimensional spaces is a major challenge in building learning systems. We propose to implement the exploration as a deterministic law derived from maximizing an information quantity. More specifically, we use the predictive information of the sensor process (of a robot) to obtain an update rule (exploration dynamics) for the controller parameters. To be adequate in robotics applications, the non-stationary nature of the underlying time series has to be taken into account, which we do by proposing the time-local predictive information (TiPI). Importantly, the exploration dynamics is derived analytically, and by this we link information theory and dynamical systems. Without a random component, the change in the parameters is deterministically given as a function of the states in a certain time window. For an embodied system this means in particular that constraints, responses and current knowledge of the dynamical interaction with the environment can directly be used to advance further exploration. Randomness is replaced with spontaneity, which we demonstrate restricts the search space automatically to the physically relevant dimensions. Its effectiveness will be presented with various experiments on high-dimensional robotic systems, and we argue that this is a promising way to avoid the curse of dimensionality. This talk describes joint work with Ralf Der and Nihat Ay.<br />
<br />
'''15 Aug 2014'''<br />
* Speaker: Juergen Schmidhuber<br />
* Affiliation: IDSIA, Switzerland<br />
* Host: James/Shariq<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''2 Sept 2014'''<br />
* Speaker: Oriol Vinyals <br />
* Affiliation: Google<br />
* Host: Guy<br />
* Status: confirmed<br />
* Title: Machine Translation with Long-Short Term Memory Models<br />
* Abstract: Supervised large deep neural networks achieved good results on speech recognition and computer vision. Although very successful, deep neural networks can only be applied to problems whose inputs and outputs can be conveniently encoded with vectors of fixed dimensionality - but cannot easily be applied to problems whose inputs and outputs are sequences. In this work, we show how to use a large deep Long Short-Term Memory (LSTM) model to solve domain-agnostic supervised sequence to sequence problems with minimal manual engineering. Our model uses one LSTM to map the input sequence to a vector of a fixed dimensionality and another LSTM to map the vector to the output sequence. We applied our model to a machine translation task and achieved encouraging results. On the WMT'14 translation task from English to French, a model combination of 6 large LSTMs achieves a BLEU score of 32.3 (where a larger score is better). For comparison, a strong standard statistical MT baseline achieves a BLEU score of 33.3. When we use our LSTM to rescore the n-best lists produced by the SMT baseline, we achieve a BLEU score of 36.3, which is a new state of the art. This is joint work with Ilya Sutskever and Quoc Le.<br />
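The encoder half of the sequence-to-sequence approach described in the abstract can be sketched in a few lines. The LSTM below uses toy sizes and random weights (purely illustrative assumptions, not the talk's actual model) to show how a variable-length input sequence is mapped to a vector of fixed dimensionality:<br />

```python
import numpy as np

# Minimal sketch of a seq2seq encoder: an LSTM reads a variable-length
# sequence and summarizes it as a fixed-size vector (its final hidden
# state). Sizes and weights are arbitrary, for illustration only.

rng = np.random.default_rng(0)
H, D = 8, 5  # hidden size, input embedding size (hypothetical)

# One weight matrix per LSTM gate (input, forget, output, cell),
# each acting on the concatenation [h_prev, x].
W = {g: rng.standard_normal((H, H + D)) * 0.1 for g in "ifoc"}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_encode(xs):
    """Run the LSTM over the sequence xs; return the final hidden state."""
    h = np.zeros(H)
    c = np.zeros(H)
    for x in xs:
        z = np.concatenate([h, x])
        i = sigmoid(W["i"] @ z)              # input gate
        f = sigmoid(W["f"] @ z)              # forget gate
        o = sigmoid(W["o"] @ z)              # output gate
        c = f * c + i * np.tanh(W["c"] @ z)  # cell-state update
        h = o * np.tanh(c)
    return h  # fixed-size summary, regardless of len(xs)

short = lstm_encode(rng.standard_normal((3, D)))
long_ = lstm_encode(rng.standard_normal((12, D)))
assert short.shape == long_.shape == (H,)
```

A second LSTM, conditioned on this fixed-size vector, would then generate the output sequence token by token.<br />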
<br />
'''19 Sept 2014'''<br />
* Speaker: Gary Marcus<br />
* Affiliation: NYU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''24 Sept 2014'''<br />
* Speaker: Alyosha Efros<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''30 Sep 2014'''<br />
* Speaker: Alejandro Bujan<br />
* Affiliation:<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Propagation and variability of evoked responses: the role of correlated inputs and oscillations<br />
* Abstract: <br />
<br />
'''8 Oct 2014'''<br />
* Speaker: Siyu Zhang<br />
* Affiliation: UC Berkeley<br />
* Host: Karl<br />
* Status: confirmed<br />
* Title: Long-range and local circuits for top-down modulation of visual cortical processing<br />
* Abstract:<br />
<br />
'''15 Oct 2014'''<br />
* Speaker: Tamara Broderick<br />
* Affiliation: UC Berkeley<br />
* Host: Yvonne/James<br />
* Status: confirmed<br />
* Title: Feature allocations, probability functions, and paintboxes<br />
* Abstract: Clustering involves placing entities into mutually exclusive categories. We wish to relax the requirement of mutual exclusivity, allowing objects to belong simultaneously to multiple classes, a formulation that we refer to as "feature allocation." The first step is a theoretical one. In the case of clustering the class of probability distributions over exchangeable partitions of a dataset has been characterized (via exchangeable partition probability functions and the Kingman paintbox). These characterizations support an elegant nonparametric Bayesian framework for clustering in which the number of clusters is not assumed to be known a priori. We establish an analogous characterization for feature allocation; we define notions of "exchangeable feature probability functions" and "feature paintboxes" that lead to a Bayesian framework that does not require the number of features to be fixed a priori. The second step is a computational one. Rather than appealing to Markov chain Monte Carlo for Bayesian inference, we develop a method to transform Bayesian methods for feature allocation (and other latent structure problems) into optimization problems with objective functions analogous to K-means in the clustering setting. These yield approximations to Bayesian inference that are scalable to large inference problems.<br />
<br />
'''29 Oct 2014'''<br />
* Speaker: Ken Nakayama<br />
* Affiliation: Harvard<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Topics in higher level visuo-motor control<br />
* Abstract: TBA<br />
<br />
'''5 Nov 2014''' - '''BVLC retreat'''<br />
<br />
'''20 Nov 2014'''<br />
* Speaker: Haruo Hosoya<br />
* Affiliation: ATR Institute, Japan<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''9 Dec 2014'''<br />
* Speaker: Dirk DeRidder<br />
* Affiliation: Dunedin School of Medicine, University of Otago, New Zealand<br />
* Host: Bruno/Walter Freeman<br />
* Status: confirmed<br />
* Title: The Bayesian brain, phantom percepts and brain implants<br />
* Abstract: TBA<br />
<br />
'''January 14, 2015'''<br />
* Speaker: Kevin O'Regan<br />
* Affiliation: CNRS - Université Paris Descartes<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''January 21, 2015'''<br />
* Speaker: Adrienne Fairhall<br />
* Affiliation: University of Washington<br />
* Host: Mike Schachter<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''January 26, 2015'''<br />
* Speaker: Abraham Peled<br />
* Affiliation: Mental Health Center, 'Technion' Israel Institute of Technology<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Clinical Brain Profiling: A Neuro-Computational Psychiatry<br />
* Abstract: TBA<br />
<br />
'''January 28, 2015'''<br />
* Speaker: Rich Ivry<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Embodied Decision Making: System interactions in sensorimotor adaptation and reinforcement learning<br />
* Abstract:<br />
<br />
'''February 11, 2015'''<br />
* Speaker: Mark Lescroart<br />
* Affiliation: UC Berkeley<br />
* Host: Karl<br />
* Status: tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''February 25, 2015'''<br />
* Speaker: Steve Chase<br />
* Affiliation: CMU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Joint Redwood/CNEP seminar<br />
* Abstract:<br />
<br />
'''March 3, 2015'''<br />
* Speaker: Andreas Herz<br />
* Affiliation: Bernstein Center, Munich<br />
* Host: Bruno/Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''March 3, 2015 - 4:00'''<br />
* Speaker: James Cooke<br />
* Affiliation: Oxford<br />
* Host: Mike Deweese<br />
* Status: confirmed<br />
* Title: Neural Circuitry Underlying Contrast Gain Control in Primary Auditory Cortex<br />
* Abstract:<br />
<br />
'''March 4, 2015'''<br />
* Speaker: Bill Sprague<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: V1 disparity tuning and the statistics of disparity in natural viewing<br />
* Abstract:<br />
<br />
'''March 11, 2015'''<br />
* Speaker: Jozsef Fiser<br />
* Affiliation: Central European University<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''April 1, 2015'''<br />
* Speaker: Saeed Saremi<br />
* Affiliation: Salk Inst<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''April 15, 2015'''<br />
* Speaker: Zahra M. Aghajan<br />
* Affiliation: UCLA<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Hippocampal Activity in Real and Virtual Environments<br />
* Abstract:<br />
<br />
'''May 7, 2015'''<br />
* Speaker: Santani Teng<br />
* Affiliation: MIT<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''May 13, 2015'''<br />
* Speaker: Harri Valpola<br />
* Affiliation: ZenRobotics<br />
* Host: Brian<br />
* Status: Tentative<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''June 24, 2015'''<br />
* Speaker: Kendrick Kay<br />
* Affiliation: Department of Psychology, Washington University in St. Louis<br />
* Host: Karl<br />
* Status: Confirmed<br />
* Title: Using functional neuroimaging to reveal the computations performed by the human visual system<br />
* Abstract: Visual perception is the result of a complex set of computational transformations performed by neurons in the visual system. Functional magnetic resonance imaging (fMRI) is ideally suited for identifying these transformations, given its excellent spatial resolution and ability to monitor activity across the numerous areas of visual cortex. In this talk, I will review past research in which we used fMRI to develop increasingly accurate models of the stimulus transformations occurring in early and intermediate visual areas. I will then describe recent research in which we successfully extend this approach to high-level visual areas involved in perception of visual categories (e.g. faces) and demonstrate how top-down attention modulates bottom-up stimulus representations. Finally, I will discuss ongoing research targeting regions of ventral temporal cortex that are essential for skilled reading. Our model-based approach, combined with high-field laminar measurements, is expected to provide an integrated picture of how bottom-up stimulus transformations and top-down cognitive factors interact to support rapid and accurate word recognition. Development of quantitative models and associated experimental paradigms may help us understand and diagnose impairments in neural processing that underlie visual disorders such as dyslexia and prosopagnosia.<br />
<br />
=== 2013/14 academic year ===<br />
<br />
'''9 Oct 2013'''<br />
* Speaker: Ekaterina Brocke<br />
* Affiliation: KTH University, Stockholm, Sweden<br />
* Host: Tony<br />
* Status: confirmed<br />
* Title: Multiscale modeling in Neuroscience: first steps towards multiscale co-simulation tool development.<br />
* Abstract: Multiscale modeling/simulation attracts an increasing number of neuroscientists to study how different levels of organization (networks of neurons, cellular/subcellular levels) interact with each other across multiple scales, space and time, to mediate different brain functions. Different scales are usually described by different physical and mathematical formalisms, thus making it non-trivial to perform the integration. In this talk, I will discuss key phenomena in neuroscience that can be addressed using subcellular/cellular models and possible approaches to performing multiscale simulations, in particular a co-simulation method. I will also introduce several multiscale "toy" models of cellular/subcellular levels that were developed with the aim of understanding numerical and technical problems that might appear during co-simulation. Finally, the first steps made towards multiscale co-simulation tool development will be presented.<br />
<br />
'''29 Oct 2013 - note: 4:00'''<br />
* Speaker: Mitya Chklovskii<br />
* Affiliation: HHMI/Janelia Farm<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''30 Oct 2013'''<br />
* Speaker: Ilya Nemenman<br />
* Affiliation: Emory University, Departments of Physics and Biology<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Large N in neural data -- expecting the unexpected.<br />
* Abstract: Recently it has become possible to directly measure simultaneous collective states of many biological components, such as neural activities, genetic sequences, or gene expression profiles. These data are revealing striking results, suggesting, for example, that biological systems are tuned to criticality, and that effective models of these systems based on only pairwise interactions among constitutive components provide surprisingly good fits to the data. We will explore a handful of simplified theoretical models, largely focusing on statistical mechanics of Ising spins, that suggest plausible explanations for these observations. Specifically, I will argue that, at least in certain contexts, these intriguing observations should be expected in multivariate interacting data in the thermodynamic limit of many interacting components.<br />
<br />
'''31 Oct 2013'''<br />
* Speaker: Oriol Vinyals<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Brian<br />
* Status: confirmed<br />
* Title: Beyond Deep Learning: Scalable Methods and Models for Learning<br />
* Abstract: In this talk I will briefly describe several techniques I explored in my thesis that improve how to efficiently model signal representations and learn useful information from them. The building block of my dissertation is based on machine learning approaches to classification, where a (typically non-linear) function is learned from labeled examples to map from signals to some useful information (e.g. an object class present in an image, or a word present in an acoustic signal). One of the motivating factors of my work has been advances in neural networks with deep architectures (which have led to the terminology "deep learning") that have shown state-of-the-art performance in acoustic modeling and object recognition -- the main focus of this thesis. In my work, I have contributed to both the learning (or training) of such architectures through faster and more robust optimization techniques, and also to the simplification of the deep architecture model to an approach that is simple to optimize. Furthermore, I derived a theoretical bound showing a fundamental limitation of shallow architectures based on sparse coding (which can be seen as a one-hidden-layer neural network), thus justifying the need for deeper architectures, while also empirically verifying these architectural choices on speech recognition. Many of my contributions have been used in a wide variety of applications, products and datasets as a result of many collaborations within ICSI and Berkeley, but also at Microsoft Research and Google Research.<br />
<br />
'''6 Nov 2013'''<br />
* Speaker: Garrett T. Kenyon<br />
* Affiliation: Los Alamos National Laboratory, The New Mexico Consortium<br />
* Host: Dylan Paiton<br />
* Status: Confirmed<br />
* Title: Using Locally Competitive Algorithms to Model Top-Down and Lateral Interactions<br />
* Abstract: Cortical connections consist of feedforward, feedback and lateral pathways. Infragranular layers project down the cortical hierarchy to both supra- and infragranular layers at the previous processing level, while the neurons in supragranular layers are linked by extensive long-range lateral projections that cross multiple cortical columns. However, most functional models of visual cortex only account for feedforward connections. Additionally, most models of visual cortex fail to account both for the thalamic projections to non-striate areas and the reciprocal connections from extrastriate areas back to the thalamus. In this talk, I will describe how a modified Locally Competitive Algorithm (LCA; Rozell et al, Neural Comp, 2008) can be used as a unifying framework for exploring the role of top-down and lateral cortical pathways within the context of deep, sparse, generative models. I will also describe an open source software tool called PetaVision that can be used to implement and execute hierarchical LCA-based models on multi-core, multi-node computer platforms without requiring specific knowledge of parallel-programming constructs.<br />
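As a minimal illustration of the LCA referenced in the abstract (a toy sketch, not the PetaVision implementation), the code below runs the standard Rozell et al. dynamics on a small synthetic problem; the dictionary, step size, and threshold are arbitrary choices:<br />

```python
import numpy as np

# Locally Competitive Algorithm (LCA) sketch: unit-norm dictionary atoms
# compete via lateral inhibition proportional to their overlap, and
# soft-thresholded membrane potentials yield a sparse code.

rng = np.random.default_rng(1)
n_dim, n_atoms = 16, 64
D = rng.standard_normal((n_dim, n_atoms))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms

a_true = np.zeros(n_atoms)
a_true[[3, 17, 42]] = [1.0, -1.5, 0.8]         # sparse ground truth
x = D @ a_true                                 # signal to encode

def soft(u, lam):
    """Soft-threshold activation: zero below lam, shrunk above."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

b = D.T @ x                                    # feedforward drive
G = D.T @ D - np.eye(n_atoms)                  # lateral inhibition term
u = np.zeros(n_atoms)                          # membrane potentials
lam, dt = 0.1, 0.1
for _ in range(300):
    a = soft(u, lam)
    u += dt * (b - u - G @ a)                  # LCA membrane dynamics
a = soft(u, lam)

assert np.count_nonzero(a) < n_atoms           # code is sparse
assert np.linalg.norm(x - D @ a) < np.linalg.norm(x)
```

The hierarchical and top-down variants discussed in the talk extend these same dynamics with feedback terms between layers.<br />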
<br />
'''14 Nov 2013 (note: Thursday), 12:30pm'''<br />
* Speaker: Geoffrey J Goodhill<br />
* Affiliation: Queensland Brain Institute and School of Mathematics and Physics, The University of Queensland, Australia<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Computational principles of neural wiring development<br />
* Abstract: Brain function depends on precise patterns of neural wiring. An axon navigating to its target must make guidance decisions based on noisy information from molecular cues in its environment. I will describe a combination of experimental and computational work showing that (1) axons may act as ideal observers when sensing chemotactic gradients, (2) the complex influence of calcium and cAMP levels on guidance decisions can be predicted mathematically, (3) the morphology of growth cones at the axonal tip can be understood in terms of just a few eigenshapes, and remarkably these shapes oscillate in time with periods ranging from minutes to hours. Together this work may shed light on how neural wiring goes wrong in some developmental brain disorders, and how best to promote appropriate regrowth of axons after injury.<br />
<br />
'''4 Dec 2013'''<br />
* Speaker: Zhenwen Dai<br />
* Affiliation: FIAS, Goethe University Frankfurt, Germany.<br />
* Host: Georgios Exarchakis<br />
* Status: Confirmed<br />
* Title: What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach <br />
* Abstract: We study optimal image encoding based on a generative approach with non-linear feature combinations and explicit position encoding. By far most approaches to unsupervised learning of visual features, such as sparse coding or ICA, account for translations by representing the same features at different positions. Some earlier models used a separate encoding of features and their positions to facilitate invariant data encoding and recognition. All probabilistic generative models with explicit position encoding have so far assumed a linear superposition of components to encode image patches. Here, we for the first time apply a model with non-linear feature superposition and explicit position encoding for patches. By avoiding linear superpositions, the studied model represents a closer match to component occlusions which are ubiquitous in natural images. In order to account for occlusions, the non-linear model encodes patches qualitatively very different from linear models by using component representations separated into mask and feature parameters. We first investigated encodings learned by the model using artificial data with mutually occluding components. We find that the model extracts the components, and that it can correctly identify the occlusive components with the hidden variables of the model. On natural image patches, the model learns component masks and features for typical image components. By using reverse correlation, we estimate the receptive fields associated with the model’s hidden units. We find many Gabor-like or globular receptive fields as well as fields sensitive to more complex structures. Our results show that probabilistic models that capture occlusions and invariances can be trained efficiently on image patches, and that the resulting encoding represents an alternative model for the neural encoding of images in the primary visual cortex. <br />
<br />
'''11 Dec 2013'''<br />
* Speaker: Kai Siedenburg<br />
* Affiliation: UC Davis, Petr Janata's Lab.<br />
* Host: Jesse Engel<br />
* Status: Confirmed<br />
* Title: Characterizing Short-Term Memory for Musical Timbre<br />
* Abstract: Short-term memory is a cognitive faculty central for the apprehension of music and speech. Little is known, however, about memory for musical timbre despite its “sisterhood” with speech; after all, speech can be regarded as a sequencing of vocal timbre. Past research has isolated many characteristic effects of verbal memory. Are these also in play for non-vocal timbre sequences? We studied this question by considering short-term memory for serial order. Using timbres and dissimilarity data from McAdams et al. (Psych. Research, 1995), we employed a same/different discrimination paradigm. Experiment 1 (N = 30 MU + 30 nonMU) revealed effects of sequence length and timbral dissimilarity of items, as well as an interaction of musical training and pitch variability: in contrast to musicians, non-musicians' performance was impaired by simultaneous changes in pitch, compared to a constant pitch baseline. Experiment 2 (N = 22) studied whether musicians' memory for timbre sequences was independent of pitch irrespective of the degree of complexity of pitch progressions. Comparing sequences with pitch changing within and across standard and comparison to a constant pitch baseline, performance was now clearly impaired for the variable pitch condition. Experiment 3 (N = 22) showed primacy and recency effects for musicians, and reproduced a positive effect of timbral heterogeneity of sequences. Our findings demonstrate the presence of hallmark effects of verbal memory such as similarity, word length, and primacy/recency for the domain of non-vocal timbre, and suggest that memory for speech and non-vocal timbre sequences might to a large extent share underlying mechanisms.<br />
<br />
'''12 Dec 2013'''<br />
* Speaker: Matthias Bethge<br />
* Affiliation: University of Tubingen<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''22 Jan 2014'''<br />
* Speaker: Thomas Martinetz<br />
* Affiliation: Univ Luebeck<br />
* Host: Bruno/Fritz<br />
* Status: confirmed<br />
* Title: Orthogonal Sparse Coding and Sensing<br />
* Abstract: Sparse coding has been a very successful concept since many natural signals have the property of being sparse in some dictionary (basis). Some natural signals are even sparse in an orthogonal basis, most prominently natural images, which are sparse in a respective wavelet transform. An encoding in an orthogonal basis has a number of advantages, e.g., finding the optimal coding coefficients is simply a projection instead of being NP-hard. Given some data, we want to find the orthogonal basis which provides the sparsest code. This problem can be seen as a generalization of Principal Component Analysis. We present an algorithm, Orthogonal Sparse Coding (OSC), which is able to find this basis very robustly. On natural images, it compresses on the level of JPEG, but can adapt to arbitrary and special data sets and achieve significant improvements. With the property of being sparse in some orthogonal basis, we show how signals can be sensed very efficiently in a hierarchical manner with at most k log D sensing actions. This hierarchical sensing might relate to the way we sense the world, with interesting applications in active vision. <br />
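The abstract's point that coding in an orthogonal basis "is simply a projection" can be made concrete with a short sketch; the orthonormal basis below is a random stand-in, not an OSC-learned one:<br />

```python
import numpy as np

# In an orthonormal basis Q, the best k-sparse approximation of x is
# found by projecting (Q.T @ x) and keeping the k largest-magnitude
# coefficients -- no NP-hard search, unlike general dictionaries.

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))  # random orthonormal basis

x = rng.standard_normal(8)
coeffs = Q.T @ x                     # projection gives exact coefficients
k = 3
keep = np.argsort(-np.abs(coeffs))[:k]
a = np.zeros_like(coeffs)
a[keep] = coeffs[keep]
x_hat = Q @ a                        # best k-term approximation in this basis

# By Parseval, the error equals the energy of the discarded coefficients.
dropped = np.delete(coeffs, keep)
assert np.isclose(np.linalg.norm(x - x_hat), np.linalg.norm(dropped))
```

OSC itself learns which orthonormal basis makes such k-term codes sparsest over a dataset; the projection step above is the cheap inner loop that makes that search tractable.<br />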
<br />
'''29 Jan 2014'''<br />
* Speaker: David Klein<br />
* Affiliation: Audience<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''5 Feb 2014''' (leave open for Barth/Martinetz seminar)<br />
<br />
'''12 Feb 2014'''<br />
* Speaker: Ilya Sutskever <br />
* Affiliation: Google<br />
* Host: Zayd<br />
* Status: confirmed<br />
* Title: Continuous vector representations for machine translation<br />
* Abstract: Dictionaries and phrase tables are the basis of modern statistical machine translation systems. I will present a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures using large monolingual data, and by mapping between the languages using a small bilingual dataset. It uses distributed representations of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs. Joint work with Tomas Mikolov and Quoc Le.<br />
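The linear-mapping idea in the abstract can be sketched with synthetic word vectors; everything here (dimensions, the seed dictionary, the least-squares solver) is an illustrative assumption, not the authors' code:<br />

```python
import numpy as np

# Given seed pairs of word vectors (x_i, z_i) from a small bilingual
# dictionary, learn a linear map W minimizing sum ||x_i W - z_i||^2 by
# least squares; new source vectors are then mapped into the target
# space, where a nearest-neighbour lookup would pick the translation.

rng = np.random.default_rng(3)
d_src, d_tgt, n = 6, 5, 50
W_true = rng.standard_normal((d_tgt, d_src))   # hidden "true" mapping
X = rng.standard_normal((n, d_src))            # source-language vectors
Z = X @ W_true.T                               # target vectors (noise-free toy)

# Least-squares fit of the mapping from the seed dictionary.
W, *_ = np.linalg.lstsq(X, Z, rcond=None)      # rows: X @ W ≈ Z

x_new = rng.standard_normal(d_src)             # an unseen source word
z_pred = x_new @ W                             # its predicted translation vector
assert np.allclose(z_pred, W_true @ x_new)     # exact recovery in the noise-free case
```

With real (noisy) embeddings the recovery is only approximate, which is why the authors report precision@5 rather than exact matches.<br />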
<br />
'''25 Feb 2014'''<br />
* Speaker: Alexander Terekhov <br />
* Affiliation: CNRS - Université Paris Descartes<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Constructing space: how a naive agent can learn spatial relationships by observing sensorimotor contingencies<br />
* Abstract:<br />
<br />
'''12 March 2014'''<br />
* Speaker: Carlos Portera-Cailliau<br />
* Affiliation: UCLA<br />
* Host: Mike<br />
* Status: confirmed<br />
* Title: Circuit defects in the neocortex of Fmr1 knockout mice<br />
* Abstract: TBA<br />
<br />
'''19 March 2014'''<br />
* Speaker: Dean Buonomano<br />
* Affiliation: UCLA<br />
* Host: Mike<br />
* Status: confirmed<br />
* Title: State-dependent Networks: Timing and Computations Based on Neural Dynamics and Short-term Plasticity<br />
* Abstract: The brain’s ability to seamlessly assimilate and process spatial and temporal information is critical to most behaviors, from understanding speech to playing the piano. Indeed, because the brain evolved to navigate a dynamic world, timing and temporal processing represent a fundamental computation. We have proposed that timing and the processing of temporal information emerge from the interaction between incoming stimuli and the internal state of neural networks. The internal state is defined not only by ongoing activity (the active state) but by time-varying synaptic properties, such as short-term synaptic plasticity (the hidden state). One prediction of this hypothesis is that timing is a general property of cortical circuits. We provide evidence in this direction by demonstrating that in vitro cortical networks can “learn” simple temporal patterns. Finally, previous theoretical studies have suggested that recurrent networks capable of self-perpetuating activity hold significant computational potential. However, harnessing the computational potential of these networks has been hampered by the fact that such networks are chaotic. We show that it is possible to “tame” chaos through recurrent plasticity, and create a novel and powerful general framework for how cortical circuits compute.<br />
<br />
'''26 March 2014'''<br />
* Speaker: Robert G. Smith<br />
* Affiliation: University of Pennsylvania<br />
* Host: Mike S<br />
* Status: confirmed<br />
* Title: Role of Dendritic Computation in the Direction-Selective Circuit of Retina<br />
* Abstract: The retina utilizes a variety of signal processing mechanisms to compute direction from image motion. The computation is accomplished by a circuit that includes starburst amacrine cells (SBACs), which are GABAergic neurons presynaptic to direction-selective ganglion cells (DSGCs). SBACs are symmetric neurons with several branched dendrites radiating out from the soma. When a stimulus moving back and forth along a SBAC dendrite sequentially activates synaptic inputs, larger post-synaptic potentials (PSPs) are produced in the dendritic tips when the stimulus moves outwards from the soma. The directional difference in EPSP amplitude is further amplified near the dendritic tips by voltage-gated channels to produce directional release of GABA. Reciprocal inhibition between adjacent SBACs may also amplify directional release. Directional signals in the independent SBAC branches are preserved because each dendrite makes selective contacts only with DSGCs of the appropriate preferred direction. Directional signals are further enhanced within the dendritic arbor of the DSGC, which essentially comprises an array of distinct dendritic compartments. Each of these dendritic compartments locally sums excitatory and inhibitory inputs, amplifies them with voltage-gated channels, and generates spikes that propagate to the axon via the soma. Overall, the computation of direction in the retina is performed by several local dendritic mechanisms, both presynaptic and postsynaptic, with the result that directional responses are robust over a broad range of stimuli.<br />
<br />
'''16 April 2014'''<br />
* Speaker: David Pfau<br />
* Affiliation: Columbia<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''22 April 2014 *Tuesday*'''<br />
* Speaker: Jochen Braun<br />
* Affiliation: Otto-von-Guericke University, Magdeburg<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Dynamics of visual perception and collective neural activity<br />
* Abstract:<br />
<br />
'''29 April 2014'''<br />
* Speaker: Giuseppe Vitiello<br />
* Affiliation: University of Salerno<br />
* Host: Fritz/Walter Freeman<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''30 April 2014'''<br />
* Speaker: Masataka Watanabe<br />
* Affiliation: University of Tokyo / Max Planck Institute for Biological Cybernetics<br />
* Host: Gautam Agarwal<br />
* Status: confirmed<br />
* Title: Turing Test for Machine Consciousness and the Chaotic Spatiotemporal Fluctuation Hypothesis<br />
* Abstract: I propose an experimental method to test various hypotheses on consciousness. Inspired by Sperry's observation that split-brain patients possess two independent streams of consciousness, the idea is to implement candidate neural mechanisms of visual consciousness onto an artificial cortical hemisphere and test whether subjective experience is evoked in the device's visual hemifield. In contrast to modern neurosynthetic devices, I show that mimicking interhemispheric connectivity assures that authentic and fine-grained subjective experience arises only when a stream of consciousness is generated within the device. This holds under a widely believed assumption regarding interhemispheric connectivity and neuronal stimulus-invariance. (I will briefly present my own evidence that human V1 does not respond to changes in the contents of visual awareness [1].)<br />
<br />
If consciousness is actually generated within the device, we should be able to construct a case where two objects presented in the device's visual field are distinguishable by visual experience but not by what is communicated through the brain-machine interface. As strange as it may sound, and although it clearly violates the laws of physics, this is likely to be happening in the intact brain, where unified subjective bilateral vision and its verbal report occur without the total interhemispheric exchange of conscious visual information.<br />
<br />
Together, I present a hypothesis on the neural mechanism of consciousness, “The Chaotic Spatiotemporal Fluctuation Hypothesis”, that passes the proposed test for visual qualia and also explains how the physics we know of today is violated. Here, neural activity is divided into two components, the time-averaged activity and the residual temporally fluctuating activity, where the former serves as the content of consciousness (neuronal population vector) and the latter as consciousness itself. The content is “read” into consciousness in the sense that every local perturbation caused by a change in the neuronal population vector creates a spatiotemporal wave in the fluctuation component that travels throughout the system. Deterministic chaos assures that every local difference makes a difference to the whole of the dynamics, as in the butterfly effect, serving as a foundation for the holistic nature of consciousness. I will present data from simultaneous electrophysiology-fMRI recordings and human fMRI [2] that support the existence of such large-scale causal fluctuation.<br />
<br />
Here, the chaotic fluctuation cannot be decoded to trace back the original perturbation in the neuronal population vector, because doing so would require the initial states of all neurons with infinite precision. Hence what is transmitted between the two hemispheres is not "information" in the normal sense. This illustrates how physics is violated by the metaphysical assumption that "chaotic spatiotemporal fluctuation is consciousness", where unification of bilateral vision and the solving of visual tasks (e.g. perfect symmetry detection) are achieved without exchanging the otherwise required Shannon information between the two hemispheres.<br />
<br />
Finally, minimal and realistic versions of the proposed test for visual qualia can be conducted on laboratory animals to validate the hypothesis. These versions deal with two biological hemispheres, which we already know contain consciousness. We dissect interhemispheric connectivity and form instead an artificial one that is capable of filtering out the neural fluctuation component. A limited interhemispheric connectivity may be sufficient, which would drastically reduce the technological challenge. If the subject is capable of performing a bilateral stimulus-matching task with the full artificial interhemispheric connectivity, but not when the fluctuation component is filtered out, this can be considered strong supporting evidence for the hypothesis.<br />
<br />
1.Watanabe, M., Cheng, K., Ueno, K., Asamizuya, T., Tanaka, K., Logothetis, N., Attention but not awareness modulates the BOLD signal in the human V1 during binocular suppression. Science, 2011. 334(6057): p. 829-31.<br />
<br />
2.Watanabe, M., Bartels, A., Macke, J., Logothetis, N., Temporal jitter of the BOLD signal reveals a reliable initial dip and improved spatial resolution. Curr Biol, 2013. 23(21): p. 2146-50.<br />
<br />
'''11 June 2014'''<br />
* Speaker: Stuart Hameroff<br />
* Affiliation: University of Arizona, Tucson<br />
* Host: Gautam<br />
* Status: confirmed<br />
* Title: ‘Tuning the brain’ – Treating mental states through microtubule vibrations <br />
* Abstract: Do mental states derive entirely from brain neuronal membrane activities? Neuronal interiors are organized by microtubules (‘MTs’), protein polymers proposed to encode memory, process information and support consciousness. Using nanotechnology, Bandyopadhyay’s group at MIT has shown coherent vibrations (megahertz to 10 kilohertz) from microtubule bundles inside active neurons, vibrations (electric field potentials ~40 to 50 mV) able to influence membrane potentials. This suggests EEG rhythms are ‘beat’ frequencies of megahertz vibrations in microtubules inside neurons (Hameroff and Penrose, 2014), and that consciousness and cognition involve vibrational patterns resonating across scales in the brain, more like music than computation. MT megahertz vibrations may be a useful therapeutic target for ‘tuning’ mood and mental states. Among noninvasive transcranial brain stimulation techniques (TMS, tDCS), transcranial ultrasound (TUS) delivers megahertz mechanical vibrations. Applied at the scalp, low-intensity, sub-thermal TUS safely reaches the brain. In human studies, brief (15 to 30 seconds) TUS at 0.5, 2 and 8 megahertz to frontal-temporal cortex results in 40 minutes or longer of reported mood improvement, and focused TUS enhances sensory discrimination (Legon et al, 2014). In vitro, ultrasound promotes neurite outgrowth in embryonic neurons (Raman), and stabilizes microtubules against disassembly (Gupta). (In Alzheimer’s disease, MTs disassemble and release tau.) These findings suggest ‘tuning the brain’ with TUS should be a safe, effective and inexpensive treatment for Alzheimer’s, traumatic brain injury, depression, anxiety, PTSD and other disorders. <br />
<br />
References: Hameroff S, Penrose R (2014) Phys Life Rev http://www.sciencedirect.com/science/article/pii/S1571064513001188; Sahu et al (2013) Biosens Bioelectron 47:141–8; Sahu et al (2013) Appl Phys Lett 102:123701; Legon et al (2014) Nature Neuroscience 17: 322–329<br />
<br />
'''25 June 2014'''<br />
* Speaker: Peter Loxley<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: The two-dimensional Gabor function adapted to natural image statistics: An analytical model of simple-cell responses in the early visual system<br />
* Abstract: TBA<br />
<br />
=== 2012/13 academic year ===<br />
<br />
'''26 Sept 2012''' <br />
* Speaker: Jason Yeatman<br />
* Affiliation: Department of Psychology, Stanford University<br />
* Host: Bruno/Susana Chung<br />
* Status: confirmed<br />
* Title: The Development of White Matter and Reading Skills<br />
* Abstract: The development of cerebral white matter involves both myelination and pruning of axons, and the balance between these two processes may differ between individuals. Cross-sectional measures of white matter development mask the interplay between these active developmental processes and their connection to cognitive development. We followed a cohort of 39 children longitudinally for three years, and measured white matter development and reading development using diffusion tensor imaging and behavioral tests. In the left arcuate and inferior longitudinal fasciculus, children with above-average reading skills initially had low fractional anisotropy (FA) with a steady increase over the 3-year period, while children with below-average reading skills had higher initial FA that declined over time. We describe a dual-process model of white matter development that balances biological processes that have opposing effects on FA, such as axonal myelination and pruning, to explain the pattern of results.<br />
<br />
'''8 Oct 2012''' <br />
* Speaker: Sophie Deneve<br />
* Affiliation: Laboratoire de Neurosciences cognitives, ENS-INSERM<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Balanced spiking networks can implement dynamical systems with predictive coding<br />
* Abstract: Neural networks can integrate sensory information and generate continuously varying outputs, even though individual neurons communicate only with spikes---all-or-none events. Here we show how this can be done efficiently if spikes communicate "prediction errors" between neurons. We focus on the implementation of linear dynamical systems and derive a spiking network model from a single optimization principle. Our model naturally accounts for two puzzling aspects of cortex. First, it provides a rationale for the tight balance and correlations between excitation and inhibition. Second, it predicts asynchronous and irregular firing as a consequence of predictive population coding, even in the limit of vanishing noise. We show that our spiking networks have error-correcting properties that make them far more accurate and robust than comparable rate models. Our approach suggests spike times do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly under-estimated.<br />
<br />
<br />
'''19 Oct 2012'''<br />
* Speaker: Gert Van Dijck<br />
* Affiliation: Cambridge<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: A solution to identifying neurones using extracellular activity in awake animals: a probabilistic machine-learning approach<br />
* Abstract: Electrophysiological studies over the last fifty years have been hampered by the difficulty of reliably assigning signals to identified cortical neurones. Previous studies have employed a variety of measures based on spike timing or waveform characteristics to tentatively classify other neurone types (Vos et al., Eur. J. Neurosci., 1999; Prsa et al., J. Neurosci., 2009), in some cases supported by juxtacellular labelling (Simpson et al., Prog. Brain Res., 2005; Holtzman et al., J. Physiol., 2006; Barmack and Yakhnitsa, J. Neurosci., 2008; Ruigrok et al., J. Neurosci., 2011), or intracellular staining and/or assessment of membrane properties (Chadderton et al., Nature, 2004; Jorntell and Ekerot, J. Neurosci., 2006; Rancz et al., Nature, 2007). Anaesthetised animals have been widely used as they can provide a ground truth through neuronal labelling, which is much harder to achieve in awake animals, where spike-derived measures tend to be relied upon (Lansink et al., Eur. J. Neurosci., 2010). Whilst spike shapes carry potentially useful information for classifying neuronal classes, they vary with electrode type and the geometric relationship between the electrode and the spike generation zone (Van Dijck et al., Int. J. Neural Syst., 2012). Moreover, spike-shape measurement is achieved with a variety of techniques, making it difficult to compare and standardise between laboratories. In this study we build probabilistic models on the statistics derived from the spike trains of spontaneously active neurones in the cerebellum and the ventral midbrain. The mean spike frequency in combination with the log-interval entropy (Bhumbra and Dyball, J. Physiol.-London, 2004) of the inter-spike-interval distribution yields the highest prediction accuracy. The cerebellum model consists of two sub-models: a molecular layer - Purkinje layer model and a granular layer - Purkinje layer model. The first model identifies molecular layer interneurones and Purkinje cells with high accuracy (92.7 %), while the latter identifies Golgi cells, granule cells, mossy fibers and Purkinje cells with high accuracy (99.2 %). Furthermore, it is shown that the model trained on anaesthetized rat and decerebrate cat data has broad applicability to other species and behavioural states: anaesthetized mice (80 %), awake rabbits (94.2 %) and awake rhesus monkeys (89-90 %). Recently, optogenetics has made it possible to obtain a ground truth about cell classes. Using optogenetically identified GABAergic and dopaminergic cells, we build similar statistical models to identify these neuron types in the ventral midbrain. This illustrates that our approach will be of general use to a broad variety of laboratories.<br />
<br />
'''Tuesday, 23 Oct 2012''' <br />
* Speaker: Jaimie Sleigh<br />
* Affiliation: University of Auckland<br />
* Host: Fritz/Andrew Szeri<br />
* Status: confirmed<br />
* Title: Is General Anesthesia a failure of cortical information integration?<br />
* Abstract: General anesthesia and natural sleep share some commonalities and some differences. Quite a lot is known about the chemical and neuronal effects of general anesthetic drugs. There are two main groups of anesthetic drugs, which can be distinguished by their effects on the EEG. The most commonly used drugs exert a strong GABAergic action, whereas a second group is characterized by minimal GABAergic effects but significant NMDA blockade. It is less clear which of these effects, and by what mechanism, cause the patient to fail to wake up when the surgeon cuts them. I will present some results from experimental brain slice work, and theoretical mean field modelling of anesthesia and sleep, that support the idea that the final common mechanism of both types of anaesthesia is fragmentation of long distance information flow in the cortex.<br />
<br />
'''31 Oct 2012''' (Halloween)<br />
* Speaker: Jonathan Landy<br />
* Affiliation: UCSB<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Mean-field replica theory: review of basics and a new approach<br />
* Abstract: Replica theory provides a general method for evaluating the mode of a distribution, and has varied applications to problems in statistical mechanics, signal processing, etc. Evaluation of the formal expressions arising in replica theory represents a formidable technical challenge, but one that physicists have apparently intuited correct methods for handling. In this talk, I will first provide a review of the historical development of replica theory, covering: 1) motivation, 2) the intuited "Parisi ansatz" solution, 3) continued controversies, and 4) a survey of applications (including to neural networks). Following this, I will discuss an exploratory effort of mine, aimed at developing an ansatz-free solution method. As an example, I will work out the phase diagram for a simple spin-glass model. This talk is intended primarily as a tutorial.<br />
<br />
'''7 Nov 2012''' <br />
* Speaker: Tom Griffiths<br />
* Affiliation: UC Berkeley<br />
* Host:Daniel Little<br />
* Status: Confirmed<br />
* Title: Identifying human inductive biases<br />
* Abstract: People are remarkably good at acquiring complex knowledge from limited data, as is required in learning causal relationships, categories, or aspects of language. Successfully solving inductive problems of this kind requires having good "inductive biases" - constraints that guide inductive inference. Viewed abstractly, understanding human learning requires identifying these inductive biases and exploring their origins. I will argue that probabilistic models of cognition provide a framework that can facilitate this project, giving a transparent characterization of the inductive biases of ideal learners. I will outline how probabilistic models are traditionally used to solve this problem, and then present a new approach that uses Markov chain Monte Carlo algorithms as the basis for an experimental method that magnifies the effects of inductive biases.<br />
<br />
'''19 Nov 2012''' (Monday) (Thanksgiving week)<br />
* Speaker: Bin Yu<br />
* Affiliation: Dept. of Statistics and EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Representation of Natural Images in V4<br />
* Abstract: The functional organization of area V4 in the mammalian ventral visual pathway is far from being well understood. V4 is believed to play an important role in the recognition of shapes and objects and in visual attention, but the complexity of this cortical area makes it hard to analyze. In particular, no current model of V4 has shown good predictions for neuronal responses to natural images and there is no consensus on the primary role of V4.<br />
In this talk, we present analysis of electrophysiological data on the response of V4 neurons to natural images. We propose a new computational model that achieves comparable prediction performance for V4 as for V1 neurons. Our model does not rely on any pre-defined image features but only on invariance and sparse coding principles. We interpret our model using sparse principal component analysis and discover two groups of neurons: those selective to texture versus those selective to contours. This supports the thesis that one primary role of V4 is to extract objects from background in the visual field. Moreover, our study also confirms the diversity of V4 neurons. Among those selective to contours, some of them are selective to orientation, others to acute curvature features.<br />
(This is joint work with J. Mairal, Y. Benjamini, B. Willmore, M. Oliver and J. Gallant.)<br />
<br />
'''30 Nov 2012''' <br />
* Speaker: Yan Karklin<br />
* Affiliation: NYU<br />
* Host: Tyler<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''10 Dec 2012 (note this would be the Monday after NIPS)''' <br />
* Speaker: Marius Pachitariu<br />
* Affiliation: Gatsby / UCL<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: NIPS paper "Learning visual motion in recurrent neural networks"<br />
* Abstract: We present a dynamic nonlinear generative model for visual motion based on a latent representation of binary-gated Gaussian variables connected in a network. Trained on sequences of images by an STDP-like rule, the model learns to represent different movement directions in different variables. We use an online approximate inference scheme that can be mapped to the dynamics of networks of neurons. Probed with drifting grating stimuli and moving bars of light, neurons in the model show patterns of responses analogous to those of direction-selective simple cells in primary visual cortex. We show how the computations of the model are enabled by a specific pattern of learnt asymmetric recurrent connections. I will also briefly discuss our application of recurrent neural networks as statistical models of simultaneously recorded spiking neurons.<br />
<br />
'''12 Dec 2012''' <br />
* Speaker: Ian Goodfellow<br />
* Affiliation: U Montreal<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''7 Jan 2013'''<br />
* Speaker: Stuart Hameroff<br />
* Affiliation: University of Arizona <br />
* Host: Gautam Agarwal<br />
* Status: confirmed<br />
* Title: Quantum cognition and brain microtubules <br />
* Abstract: Cognitive decision processes are generally modeled with classical Bayesian probability, but may be better suited to quantum mathematics. For example: 1) Psychological conflict, ambiguity and uncertainty can be viewed as (quantum) superposition of multiple possible judgments and beliefs. 2) Measurement (e.g. answering a question, reaching a decision) reduces possibilities to definite states (‘constructing reality’, ‘collapsing the wave function’). 3) Previous questions influence subsequent answers, so sequence affects outcomes (‘contextual non-commutativity’). 4) Judgments and choices may deviate from classical logic, suggesting random, or ‘non-computable’ quantum influences. Can quantum cognition operate in the brain? Do classical brain activities simulate quantum processes? Or have biomolecular quantum devices evolved? In this talk I will discuss how a finer scale, intra-neuronal level of quantum information processing in cytoskeletal microtubules can accumulate, operate upon and integrate quantum information and memory for self-collapse to classical states which regulate axonal firings, controlling behavior.<br />
<br />
'''Monday 14 Jan 2013, 1:00pm'''<br />
* Speaker: Dibyendu Mandal <br />
* Affiliation: Physics Dept., University of Maryland (Jarzynski group)<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: An exactly solvable model of Maxwell’s demon<br />
* Abstract: The paradox of Maxwell’s demon has stimulated numerous thought experiments, leading to discussions about the thermodynamic implications of information processing. However, the field has lacked a tangible example or model of an autonomous, mechanical system that reproduces the actions of the demon. To address this issue, we introduce an explicit model of a device that can deliver work to lift a mass against gravity by rectifying thermal fluctuations, while writing information to a memory register. We solve for the steady-state behavior of the model and construct its nonequilibrium phase diagram. In addition to the engine-like action described above, we identify a Landauer eraser region in the phase diagram where the model uses externally supplied work to remove information from the memory register. Our model offers a simple paradigm for investigating the thermodynamics of information processing by exposing a transparent mechanism of operation.<br />
<br />
'''23 Jan 2013'''<br />
* Speaker: Carlos Brody<br />
* Affiliation: Princeton<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Neural substrates of decision-making in the rat<br />
* Abstract: Gradual accumulation of evidence is thought to be a fundamental component of decision-making. Over the last 16 years, research in non-human primates has revealed neural correlates of evidence accumulation in parietal and frontal cortices, and other brain areas. However, the circuit mechanisms underlying these neural correlates remain unknown. Reasoning that a rodent model of evidence accumulation would allow a greater number of experimental subjects, and therefore experiments, as well as facilitate the use of molecular tools, we developed a rat accumulation of evidence task, the "Poisson Clicks" task. In this task, sensory evidence is delivered in pulses whose precisely-controlled timing varies widely within and across trials. The resulting data are analyzed with models of evidence accumulation that use the richly detailed information of each trial’s pulse timing to distinguish between different decision mechanisms. The method provides great statistical power, allowing us to: (1) provide compelling evidence that rats are indeed capable of gradually accumulating evidence for decision-making; (2) accurately estimate multiple parameters of the decision-making process from behavioral data; and (3) measure, for the first time, the diffusion constant of the evidence accumulator, which we show to be optimal (i.e., equal to zero). In addition, the method provides a trial-by-trial, moment-by-moment estimate of the value of the accumulator, which can then be compared in awake behaving electrophysiology experiments to trial-by-trial, moment-by-moment neural firing rate measures. Based on such a comparison, we describe data and a novel analysis approach that reveals differences between parietal and frontal cortices in the neural encoding of accumulating evidence. Finally, using semi-automated training methods to produce tens of rats trained in the Poisson Clicks accumulation of evidence task, we have also used pharmacological inactivation to ask, for the first time, whether parietal and frontal cortices are required for accumulation of evidence, and we are using optogenetic methods to rapidly and transiently inactivate brain regions so as to establish precisely when, during each decision-making trial, each brain region's activity is necessary for performance of the task.<br />
<br />
'''28 Jan 2013'''<br />
* Speaker: Eugene M. Izhikevich<br />
* Affiliation: Brain Corporation<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Spikes<br />
* Abstract: Most communication in the brain is via spikes. While we understand the spike-generation mechanism of individual neurons, we fail to appreciate the spike-timing code and its role in neural computations. The speaker starts with simple models of neuronal spiking and bursting, describes small neuronal circuits that learn spike-timing code via spike-timing dependent plasticity (STDP), and finishes with biologically detailed and anatomically accurate large-scale brain models.<br />
<br />
'''29 Jan 2013'''<br />
* Speaker: Goren Gordon<br />
* Affiliation: Weizmann Institute<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Hierarchical Curiosity Loops – Model, Behavior and Robotics<br />
* Abstract: Autonomously learning about one's own body and its interaction with the environment is a formidable challenge, yet it is ubiquitous in biology: every animal pup and every human infant accomplishes this task in their first few months of life. Furthermore, biological agents’ curiosity actively drives them to explore and experiment in order to expedite their learning progress. To bridge the gap between biological and artificial agents, a formal mathematical theory of curiosity was developed that attempts to explain observed biological behaviors and enable curiosity emergence in robots. In the talk, I will present the hierarchical curiosity loops model, its application to rodents' exploratory behavior and its implementation in a fully autonomously learning and behaving reaching robot.<br />
<br />
'''29 Jan 2013'''<br />
* Speaker: Jenny Read<br />
* Affiliation: Institute of Neuroscience, Newcastle University<br />
* Host: Sarah<br />
* Status: confirmed<br />
* Title: Stereoscopic vision<br />
* Abstract: [To be written]<br />
<br />
'''7 Feb 2013'''<br />
* Speaker: Valero Laparra<br />
* Affiliation: University of Valencia<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Empirical statistical analysis of phases in Gabor filtered natural images<br />
* Abstract:<br />
<br />
'''20 Feb 2013'''<br />
* Speaker: Dolores Bozovic<br />
* Affiliation: UCLA<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Bifurcations and phase-locking dynamics in the auditory system<br />
* Abstract: The inner ear constitutes a remarkable biological sensor that exhibits nanometer-scale sensitivity of mechanical detection. The first step in auditory processing is performed by hair cells, which convert movement into electrical signals via opening of mechanically gated ion channels. These cells operate in a viscous medium, but can nevertheless sustain oscillations, amplify incoming signals, and even exhibit spontaneous motility, indicating the presence of an underlying active amplification system. Theoretical models have proposed that a hair cell constitutes a nonlinear system with an internal feedback mechanism that can drive it across a bifurcation and into an unstable regime. Our experiments explore the nonlinear response as well as feedback mechanisms that enable self-tuning already at the peripheral level, as measured in vitro on sensory tissue. A simple dynamical systems framework will be discussed that captures the main features of the experimentally observed behavior in the form of an Arnold tongue.<br />
<br />
'''27 March 2013'''<br />
* Speaker: Dale Purves<br />
* Affiliation: Duke<br />
* Host: Sarah<br />
* Status: confirmed<br />
* Title: How Visual Evolution Determines What We See<br />
* Abstract: Information about the physical world is excluded from visual stimuli by the nature of biological vision (the inverse optics problem). Nonetheless, humans and other visual animals routinely succeed in their environments. The talk will explain how the assignment of perceptual values to visual stimuli according to the frequency of occurrence of stimulus patterns resolves the inverse problem and determines the basic visual qualities we see. This interpretation of vision implies that the best (and perhaps the only) way to understand visual system circuitry is to evolve it, an idea supported by recent work.<br />
<br />
'''9 April 2013'''<br />
* Speaker: Mounya Elhilali<br />
* Affiliation: Johns Hopkins<br />
* Host: Tyler<br />
* Status: confirmed<br />
* Title: Attention at the cocktail party: Neural bases and computational strategies for auditory scene analysis<br />
* Abstract: The perceptual organization of sounds in the environment into coherent objects is a feat constantly facing the auditory system. It manifests itself in the everyday challenge faced by humans and animals alike to parse complex acoustic information arising from multiple sound sources into separate auditory streams. While seemingly effortless, uncovering the neural mechanisms and computational principles underlying this remarkable ability remain a challenge for both the experimental and theoretical neuroscience communities. In this talk, I discuss the potential role of neuronal tuning in mammalian primary auditory cortex in mediating this process. I also examine the role of mechanisms of attention in adapting this neural representation to reflect both the sensory content and the changing behavioral context of complex acoustic scenes.<br />
<br />
'''17th of April 2013'''<br />
* Speaker: Wiktor Młynarski<br />
* Affiliation: Max Planck Institute for Mathematics in the Sciences<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: Statistical Models of Binaural Sounds<br />
* Abstract: The auditory system exploits disparities in the sounds arriving at the left and right ear to extract information about the spatial configuration of sound sources. According to the widely acknowledged Duplex Theory, sounds of low frequency are localized based on Interaural Time Differences (ITDs) and localization of high frequency sources relies on Interaural Level Differences (ILDs). Natural sounds, however, possess a rich structure and contain multiple frequency components. This leads to the question: what are the contributions of different cues to sound position identification in the natural environment and how much information do they carry about its spatial structure? In this talk, I will present my attempts to answer the above questions using statistical, generative models of naturalistic (simulated) and fully natural binaural sounds.<br />
<br />
'''15 May 2013'''<br />
* Speaker: Byron Yu<br />
* Affiliation: CMU<br />
* Host: Bruno/Jose (jointly sponsored with CNEP)<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''22 May 2013'''<br />
* Speaker: Bijan Pesaran<br />
* Affiliation: NYU<br />
* Host: Bruno/Jose (jointly sponsored with CNEP)<br />
* Status: confirmed <br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
=== 2011/12 academic year ===<br />
<br />
'''15 Sep 2011 (Thursday, at noon)'''<br />
* Speaker: Kathrin Berkner<br />
* Affiliation: Ricoh Innovations Inc.<br />
* Host: Ivana Tosic<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''21 Sep 2011'''<br />
* Speaker: Mike Kilgard<br />
* Affiliation: UT Dallas<br />
* Host: Michael Silver<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''27 Sep 2011'''<br />
* Speaker: Moshe Gur<br />
* Affiliation: Dept. of Biomedical Engineering, Technion, Israel Institute of Technology<br />
* Host: Bruno/Stan<br />
* Status: Confirmed<br />
* Title: On the unity of perception: How does the brain integrate activity evoked at different cortical loci?<br />
* Abstract: Any physical device we know, including computers, must send information about A and B to a common point C in order to compare them. I have done experiments in three modalities (somatosensory, auditory, and visual) in which two different loci in primary cortex are stimulated, and I argue that the "machine" converging hypothesis cannot explain the perceptual results. Thus we must assume a non-converging mechanism whereby the brain, at times, can compare (integrate, process) events that take place at different loci without sending the information to a common target. Once we allow for such a mechanism, many phenomena can be viewed differently. Take, for example, the question of how and where multisensory integration takes place: we perceive a synchronized talking face, yet detailed visual and auditory information are represented at very different brain loci.<br />
<br />
'''5 Oct 2011'''<br />
* Speaker: Susanne Still<br />
* Affiliation: University of Hawaii at Manoa<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: Predictive power, memory and dissipation in learning systems operating far from thermodynamic equilibrium<br />
* Abstract: Understanding the physical processes that underlie the functioning of biological computing machinery often requires describing processes that occur far from thermodynamic equilibrium. In recent years significant progress has been made in this area, most notably with Jarzynski’s work relation and Crooks’ fluctuation theorem. In this talk I will explore how dissipation of energy is related to a system's information-processing inefficiency. The focus is on driven systems that are embedded in a stochastic operating environment. If we describe the system as a state machine, then we can interpret the stochastic dynamics as performing a computation that results in an (implicit) model of the stochastic driving signal. I will show that instantaneous non-predictive information, which serves as a measure of model inefficiency, provides a lower bound on the average dissipated work. This implies that learning systems with larger predictive power can operate more energy-efficiently. We could speculate that biological systems may have evolved to reflect this kind of adaptation. One interesting insight is that a purely physical requirement turns out to be perfectly in line with the general belief that a useful model must be predictive (at fixed model complexity). Our result thereby ties together ideas from learning theory with basic non-equilibrium thermodynamics.<br />
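The bound described in the abstract can be stated compactly. The notation below is my own sketch of the relation, not material from the talk: the system state s_t stores memory I_mem(t) about the current driving signal x_t, of which I_pred(t) is predictive of the next input x_{t+1}.<br />

```latex
% Average dissipated work (in units of kT, via \beta = 1/kT) is bounded
% below by the instantaneous nonpredictive information:
%   I_mem(t)  = I[s_t ; x_t]      (memory about the current signal)
%   I_pred(t) = I[s_t ; x_{t+1}]  (predictive power about the next signal)
\beta \,\langle W_{\mathrm{diss}}[x_t \to x_{t+1}] \rangle
  \;\ge\; I_{\mathrm{mem}}(t) - I_{\mathrm{pred}}(t)
```

A system whose memory is mostly predictive (I_pred close to I_mem) therefore dissipates less, which is the sense in which predictive power implies energetic efficiency.<br />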
<br />
'''19 Oct 2011'''<br />
* Speaker: Graham Cummins<br />
* Affiliation: WSU<br />
* Host: Jeff Teeters<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''26 Oct 2011'''<br />
* Speaker: Shinji Nishimoto<br />
* Affiliation: Gallant lab, UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''14 Dec 2011'''<br />
* Speaker: Austin Roorda<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: How the unstable eye sees a stable and moving world<br />
* Abstract:<br />
<br />
'''11 Jan 2012'''<br />
* Speaker: Ken Nakayama<br />
* Affiliation: Harvard University<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Subjective Contours<br />
* Abstract: The concept of the receptive field in visual science has been transformative. It fueled great discoveries of the second half of the 20th century, providing the dominant understanding of how the visual system works at its early stages. Its reign has been extended to the field of object recognition, where, in the form of a linear classifier, it provides a framework for understanding visual object recognition (DiCarlo and Cox, 2007).<br />
Untamed, however, are areas of visual perception, now more or less ignored, dubbed variously as the 2.5D sketch, mid-level vision, or surface representations. Here, neurons with their receptive fields seem unable to bridge the gap, to supply us with even a plausible speculative framework for understanding amodal completion, subjective contours, and other surface phenomena. Correspondingly, these areas have become a backwater: ignored and leapt over.<br />
Subjective contours, however, remain as vivid as ever, even more so.<br />
Every day, our visual system makes countless visual inferences as to the layout of the world's surfaces and objects. What is remarkable is that subjective contours visibly reveal these inferences.<br />
<br />
'''Tuesday, 24 Jan 2012'''<br />
* Speaker: Aniruddha Das<br />
* Affiliation: Columbia University<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''22 Feb 2012'''<br />
* Speaker: Elad Schneidman <br />
* Affiliation: Department of Neurobiology, Weizmann Institute of Science<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Sparse high order interaction networks underlie learnable neural population codes<br />
* Abstract:<br />
<br />
'''29 Feb 2012 (at noon as usual)'''<br />
* Speaker: Heather Read<br />
* Affiliation: U. Connecticut<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: "Transformation of sparse temporal coding from auditory colliculus and cortex"<br />
* Abstract: TBD<br />
<br />
'''1 Mar 2012 (note: Thurs)'''<br />
* Speaker: Daniel Zoran<br />
* Affiliation: Hebrew University, Jerusalem<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''7 Mar 2012'''<br />
* Speaker: David Sivak<br />
* Affiliation: UCB<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''8 Mar 2012'''<br />
* Speaker: Ivan Schwab<br />
* Affiliation: UC Davis<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Evolution's Witness: How Eyes Evolved<br />
* Abstract:<br />
<br />
'''14 Mar 2012'''<br />
* Speaker: David Sussillo<br />
* Affiliation:<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''18 April 2012'''<br />
* Speaker: Kristofer Bouchard<br />
* Affiliation: UCSF<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Cortical Foundations of Human Speech Production<br />
* Abstract:<br />
<br />
'''23 May 2012''' (rescheduled from April 11)<br />
* Speaker: Logan Grosenick<br />
* Affiliation: Stanford, Deisseroth & Suppes Labs<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: Acquisition, creation, & analysis of 4D light fields with applications to calcium imaging & optogenetics<br />
* Abstract: In Light Field Microscopy (LFM), images can be computationally refocused after they are captured [1]. This permits acquiring focal stacks and reconstructing volumes from a single camera frame. In Light Field Illumination (LFI), the same ideas can be used to create an illumination system that can deliver focused light to any position in a volume without moving optics, and these two devices (LFM/LFI) can be used together in the same system [2]. So far, these imaging and illumination systems have largely been used independently in proof-of-concept experiments [1,2]. In this talk I will discuss applications of a combined scanless volumetric imaging and illumination system to 4D calcium imaging and photostimulation of neurons in vivo and in vitro. The volumes resulting from these methods are large (>500,000 voxels per time point), collected at 10-100 frames per second, and highly correlated in space and time. Analyzing such data has required the development and application of machine learning methods appropriate to large, sparse, nonnegative data, as well as the estimation of neural graphical models from calcium transients. This talk will cover the reconstruction and creation of volumes in a microscope using Light Fields [1,2], and the current state of the art for analyzing these large volumes in the context of calcium imaging and optogenetics. <br />
<br />
[1] M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz. Light Field Microscopy. ACM Transactions on Graphics 25(3), Proceedings of SIGGRAPH 2006.<br />
[2] M. Levoy, Z. Zhang, and I. McDowall. Recording and controlling the 4D light field in a microscope. Journal of Microscopy, Volume 235, Part 2, 2009, pp. 144-162. Cover article.<br />
<br />
BIO: Logan received bachelor's degrees with honors in Biology and Psychology from Stanford, and a master's degree in Statistics, also from Stanford. He is a Ph.D. candidate in the Neurosciences Program working in the labs of Karl Deisseroth and Patrick Suppes, and a trainee at the Stanford Center for Mind, Brain, and Computation. He is interested in developing and applying novel computational imaging and machine learning techniques in order to observe, control, and understand neuronal circuit dynamics.<br />
<br />
'''7 June 2012''' (Thursday)<br />
* Speaker: Mitya Chklovskii<br />
* Affiliation: Janelia<br />
* Host: Bruno<br />
* Status:<br />
* Title:<br />
* Abstract:<br />
<br />
'''27 June 2012''' <br />
* Speaker: Jerry Feldman<br />
* Affiliation:<br />
* Host: Bruno<br />
* Status:<br />
* Title:<br />
* Abstract:<br />
<br />
'''30 July 2012''' <br />
* Speaker: Lucas Theis<br />
* Affiliation: Matthias Bethge lab, Werner Reichardt Centre for Integrative Neuroscience, Tübingen<br />
* Host: Jascha<br />
* Status: Confirmed<br />
* Title: Hierarchical models of natural images<br />
* Abstract: Probabilistic models of natural images have been used to solve a variety of computer vision tasks as well as a means to better understand the computations performed by the visual system in the brain. Many theoretical considerations and biological observations suggest that natural image models should be hierarchically organized, yet to date the best-known models are still based on what are better described as shallow representations. In this talk, I will present two image models. One is based on the idea of Gaussianization for greedily constructing hierarchical generative models. I will show that when combined with independent subspace analysis, it is able to compete with the state of the art for modeling image patches. The other model combines mixtures of Gaussian scale mixtures with a directed graphical model and multiscale image representations, and is able to generate highly structured images of arbitrary size. Evaluating the model's likelihood and comparing it to a large number of other image models shows that it might well be the best model for natural images yet.<br />
<br />
(joint work with Reshad Hosseini and Matthias Bethge)<br />
<br />
=== 2010/11 academic year ===<br />
<br />
'''02 Sep 2010'''<br />
* Speaker: Johannes Burge<br />
* Affiliation: University of Texas at Austin<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''8 Sep 2010'''<br />
* Speaker: Tobi Szuts<br />
* Affiliation: Meister Lab/ Harvard U.<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Wireless recording of neural activity in the visual cortex of a freely moving rat.<br />
* Abstract: Conventional neural recording systems restrict behavioral experiments to a flat indoor environment compatible with the cable that tethers the subject to the recording instruments. To overcome these constraints, we developed a wireless multi-channel system for recording neural signals from a freely moving animal the size of a rat or larger. The device takes up to 64 voltage signals from implanted electrodes, samples each at 20 kHz, time-division multiplexes them onto a single output line, and transmits that output by radio frequency to a receiver and recording computer more than 60 m away. The system introduces less than 4 µV RMS of electrode-referred noise, comparable to wired recording systems and considerably less than biological noise, and offers greater channel count and transmission distance than existing telemetry systems. The wireless system has been used to record from the visual cortex of a rat under unconstrained conditions. Outdoor recordings show that V1 activity is modulated by nest-building activity. During unguided behavior indoors, neurons responded rapidly and consistently to changes in light level, suppressive effects were prominent in response to an illuminant transition, and firing rate was strongly modulated by locomotion. Neural firing in the visual cortex is relatively sparse, and moderate correlations are observed over large distances, suggesting that synchrony is driven by global processes.<br />
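The throughput implied by the numbers in the abstract is easy to check; a back-of-envelope sketch (the 12-bit sample depth is an assumed value; the abstract does not state one):<br />

```python
# Back-of-envelope data rate for the wireless headstage described above.
# Channel count and sample rate are from the abstract; the bit depth is
# an assumption (typical for neural ADCs), not a stated spec.
channels = 64
sample_rate_hz = 20_000        # per channel
bits_per_sample = 12           # assumed

aggregate_samples = channels * sample_rate_hz      # multiplexed stream, samples/s
raw_bitrate = aggregate_samples * bits_per_sample  # bits/s on the radio link

print(aggregate_samples)       # 1.28 million samples/s on the single output line
print(raw_bitrate / 1e6)       # 15.36 Mbit/s raw payload
```

Even with this modest bit depth, the multiplexed link must sustain tens of megabits per second, which is why the single-output-line design matters.<br />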
<br />
'''29 Sep 2010'''<br />
* Speaker: Vikash Gilja<br />
* Affiliation: Stanford University<br />
* Host: Charles<br />
* Status: Confirmed<br />
* Title: Towards Clinically Viable Neural Prosthetic Systems.<br />
* Abstract:<br />
<br />
'''20 Oct 2010'''<br />
* Speaker: Alexandre Francois<br />
* Affiliation: USC<br />
* Host: <br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''3 Nov 2010'''<br />
* Speaker: Eric Jonas and Vikash Mansinghka<br />
* Affiliation: Navia Systems<br />
* Host: Jascha<br />
* Status: Confirmed<br />
* Title: Natively Probabilistic Computation: Principles, Artifacts, Architectures and Applications<br />
* Abstract: Complex probabilistic models and Bayesian inference are becoming increasingly critical across science and industry, especially in large-scale data analysis. They are also central to our best computational accounts of human cognition, perception and action. However, all these efforts struggle with the infamous curse of dimensionality. Rich probabilistic models can seem hard to write and even harder to solve, as specifying and calculating probabilities often appears to require the manipulation of exponentially (and sometimes infinitely) large tables of numbers.<br />
<br />
We argue that these difficulties reflect a basic mismatch between the needs of probabilistic reasoning and the deterministic, functional orientation of our current hardware, programming languages and CS theory. To mitigate these issues, we have been developing a stack of abstractions for natively probabilistic computation, based around stochastic simulators (or samplers) for distributions, rather than evaluators for deterministic functions. Ultimately, our aim is to produce a model of computation and the associated hardware and programming tools that are as suited for uncertain inference and decision-making as our current computers are for precise arithmetic.<br />
<br />
In this talk, we will give an overview of the entire stack of abstractions supporting natively probabilistic computation, with technical detail on several hardware and software artifacts we have implemented so far. We will also touch on some new theoretical results regarding the computational complexity of probabilistic programs. Throughout, we will motivate and connect this work to some current applications in biomedical data analysis and computer vision, as well as potential hypotheses regarding the implementation of probabilistic computation in the brain.<br />
<br />
This talk includes joint work with Keith Bonawitz, Beau Cronin, Cameron Freer, Daniel Roy and Joshua Tenenbaum.<br />
<br />
BRIEF BIOGRAPHY<br />
<br />
Vikash Mansinghka is a co-founder and the CTO of Navia Systems, a venture-funded startup company building natively probabilistic computing machines. He spent 10 years at MIT, eventually earning an S.B. in Mathematics, an S.B. in Computer Science, an M.Eng. in Computer Science, and a PhD in Computation. He held graduate fellowships from the NSF and MIT's Lincoln Laboratories, and his PhD dissertation won the 2009 MIT George M. Sprowls award for best dissertation in computer science. He currently serves on DARPA's Information Science and Technology (ISAT) Study Group.<br />
<br />
Eric Jonas is a co-founder of Navia Systems, responsible for in-house accelerated inference research and development. He spent ten years at MIT, where he earned S.B. degrees in electrical engineering and computer science and in neurobiology, and an M.Eng. in EECS, with a neurobiology PhD expected soon. He is passionate about biological applications of probabilistic reasoning and hopes to use Navia's capabilities to combine data from biological science, clinical histories, and patient outcomes into seamless models.<br />
<br />
'''8 Nov 2010'''<br />
* Speaker: Patrick Ruther<br />
* Affiliation: Imtek, University of Freiburg<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''10 Nov 2010'''<br />
* Speaker: Aurel Lazar<br />
* Affiliation: Department of Electrical Engineering, Columbia University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Encoding Visual Stimuli with a Population of Hodgkin-Huxley Neurons<br />
* Abstract: We first present a general framework for the reconstruction of natural video scenes encoded with a population of spiking neural circuits with random thresholds. The visual encoding system consists of a bank of filters, modeling the visual receptive fields, in cascade with a population of neural circuits, modeling encoding with spikes in the early visual system. The neuron models considered include integrate-and-fire neurons and ON-OFF neuron pairs with threshold-and-fire spiking mechanisms. All thresholds are assumed to be random. We show that for both time-varying and space-time-varying stimuli, neural spike encoding is akin to taking noisy measurements on the stimulus. Second, we formulate the reconstruction problem as the minimization of a suitable cost functional in a finite-dimensional vector space and provide an explicit algorithm for stimulus recovery. We also present a general solution using the theory of smoothing splines in Reproducing Kernel Hilbert Spaces. We provide examples for both synthetic video and natural scenes, and show that the quality of the reconstruction degrades gracefully as the threshold variability of the neurons increases. Third, we demonstrate a number of simple operations on the original visual stimulus, including translations, rotations and zooming, all executed natively in the spike domain. The processed spike trains are decoded for the faithful recovery of the stimulus and its transformations. Finally, we extend the above results to neural encoding circuits built with Hodgkin-Huxley neurons.<br />
References:<br />
Aurel A. Lazar, Eftychios A. Pnevmatikakis and Yiyin Zhou, "Encoding Natural Scenes with Neural Circuits with Random Thresholds," Vision Research, 2010, Special Issue on Mathematical Models of Visual Coding, http://dx.doi.org/10.1016/j.visres.2010.03.015<br />
Aurel A. Lazar, "Population Encoding with Hodgkin-Huxley Neurons," IEEE Transactions on Information Theory, 56(2):821-837, February 2010, Special Issue on Molecular Biology and Neuroscience, http://dx.doi.org/10.1109/TIT.2009.2037040<br />
<br />
'''11 Nov 2010''' (UCB holiday)<br />
* Speaker: Martha Nari Havenith<br />
* Affiliation: UCL<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: Finding spike timing in the visual cortex - Oscillations as the internal clock of vision?<br />
* Abstract:<br />
<br />
'''19 Nov 2010''' (note: on Friday because of SFN)<br />
* Speaker: Dan Butts<br />
* Affiliation: UMD<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: Common roles of inhibition in visual and auditory processing.<br />
* Abstract: The role of inhibition in sensory processing is often obscured in extracellular recordings, because the absence of a neuronal response associated with inhibition might also be explained by a simple lack of excitation. However, increasingly, evidence from intracellular recordings demonstrates important roles of inhibition in shaping the stimulus selectivity of sensory neurons in both the visual and auditory systems. We have developed a nonlinear modeling approach that can identify putative excitatory and inhibitory inputs to a neuron using standard extracellular recordings, and have applied these techniques to understand the role of inhibition in shaping sensory processing in visual and auditory areas. In pre-cortical visual areas (retina and LGN), we find that inhibition likely plays a role in generating temporally precise responses, and mediates adaptation to changing contrast. In an auditory pre-cortical area (inferior colliculus) identified inhibition has nearly identical appearance and functions in temporal processing and adaptation. Thus, we predict common roles of inhibition in these sensory areas, and more generally demonstrate general methods for characterizing the nonlinear computations that comprise sensory processing.<br />
<br />
'''24 Nov 2010'''<br />
* Speaker: Eizaburo Doi<br />
* Affiliation: NYU<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
<br />
'''29 Nov 2010 - informal talk'''<br />
* Speaker: Eero Lehtonen<br />
* Affiliation: UTU Finland<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Memristors<br />
* Abstract:<br />
<br />
'''1 Dec 2010'''<br />
* Speaker: Gadi Geiger<br />
* Affiliation: MIT<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: Visual and Auditory Perceptual Modes that Characterize Dyslexics<br />
* Abstract: I will describe how dyslexics’ visual and auditory perception is wider and more diffuse than that of typical readers, which suggests wider neural tuning in dyslexics. In addition, I will describe how this processing relates to difficulties in reading. Strengthening the argument, and more importantly helping dyslexics, I will describe a regimen of practice that results in improved reading in dyslexics while narrowing perception.<br />
<br />
<br />
'''13 Dec 2010'''<br />
* Speaker: Jorg Lueke<br />
* Affiliation: FIAS<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Linear and Non-linear Approaches to Component Extraction and Their Applications to Visual Data<br />
* Abstract: In the nervous system of humans and animals, sensory data are represented as combinations of elementary data components. While for data such as sound waveforms the elementary components combine linearly, other data can better be modeled by non-linear forms of component superpositions. I motivate and discuss two models with binary latent variables: one using standard linear superpositions of basis functions and one using non-linear superpositions. Crucial for the applicability of both models are efficient learning procedures. I briefly introduce a novel training scheme (ET) and show how it can be applied to probabilistic generative models. For linear and non-linear models the scheme efficiently infers the basis functions as well as the level of sparseness and data noise. In large-scale applications to image patches, we show results on the statistics of inferred model parameters. Differences between the linear and non-linear models are discussed, and both models are compared to results of standard approaches in the literature and to experimental findings. Finally, I briefly discuss learning in a recent model that takes explicit component occlusions into account.<br />
<br />
'''15 Dec 2010'''<br />
* Speaker: Claudia Clopath<br />
* Affiliation: Université Paris Descartes<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
<br />
'''18 Jan 2011'''<br />
* Speaker: Siwei Lyu<br />
* Affiliation: Computer Science Department, University at Albany, SUNY<br />
* Host: Bruno<br />
* Status: confirmed <br />
* Title: Divisive Normalization as an Efficient Coding Transform: Justification and Evaluation<br />
* Abstract:<br />
<br />
'''19 Jan 2011'''<br />
* Speaker: David Field (informal talk)<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: Tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''25 Jan 2011'''<br />
* Speaker: Ruth Rosenholtz<br />
* Affiliation: Dept. of Brain & Cognitive Sciences, Computer Science and AI Lab, MIT<br />
* Host: Bruno<br />
* Status: Confirmed <br />
* Title: What your visual system sees where you are not looking<br />
* Abstract:<br />
<br />
'''26 Jan 2011'''<br />
* Speaker: Ernst Niebur<br />
* Affiliation: Johns Hopkins U<br />
* Host: Fritz<br />
* Status: Confirmed <br />
* Title: <br />
* Abstract:<br />
<br />
'''16 March 2011'''<br />
* Speaker: Vladimir Itskov<br />
* Affiliation: University of Nebraska-Lincoln<br />
* Host: Chris<br />
* Status: Confirmed <br />
* Title: <br />
* Abstract:<br />
<br />
'''23 March 2011'''<br />
* Speaker: Bruce Cumming<br />
* Affiliation: National Institutes of Health<br />
* Host: Ivana<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract:<br />
<br />
'''27 April 2011'''<br />
* Speaker: Lubomir Bourdev<br />
* Affiliation: Computer Science, UC Berkeley<br />
* Host:Bruno<br />
* Status: Confirmed<br />
* Title: "Poselets and Their Applications in High-Level Computer Vision Problems"<br />
* Abstract:<br />
<br />
'''12 May 2011 (note: Thursday)'''<br />
* Speaker: Jack Culpepper<br />
* Affiliation: Redwood Center/EECS<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''26 May 2011'''<br />
* Speaker: Ian Stevenson<br />
* Affiliation: Northwestern University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Explaining tuning curves by estimating interactions between neurons<br />
* Abstract: One of the central tenets of systems neuroscience is that tuning curves are a byproduct of the interactions between neurons. Using multi-electrode recordings and recently developed inference techniques we can begin to examine this idea in detail and study how well we can explain the functional properties of neurons using the activity of other simultaneously recorded neurons. Here we examine datasets from 6 different brain areas recorded during typical sensorimotor tasks each with ~100 simultaneously recorded neurons. Using these datasets we measured the extent to which interactions between neurons can explain the tuning properties of individual neurons. We found that, in almost all areas, modeling interactions between 30-50 neurons allows more accurate spike prediction than tuning curves. This suggests that tuning can, in some sense, be explained by interactions between neurons in a variety of brain areas, even when recordings consist of relatively small numbers of neurons.<br />
<br />
'''1 June 2011'''<br />
* Speaker: Michael Oliver<br />
* Affiliation: Gallant lab<br />
* Host: Bruno<br />
* Status: Tentative <br />
* Title: <br />
* Abstract:<br />
<br />
'''8 June 2011'''<br />
* Speaker: Alyson Fletcher<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: Generalized Approximate Message Passing for Neural Receptive Field Estimation and Connectivity<br />
* Abstract: Fundamental to understanding sensory encoding and the connectivity of neurons are effective tools for developing and validating complex mathematical models from experimental data. In this talk, I present a graphical-models approach to the problems of neural connectivity reconstruction under multi-neuron excitation and of receptive field estimation for sensory neurons in response to stimuli. I describe a new class of Generalized Approximate Message Passing (GAMP) algorithms for a general class of inference problems on graphical models, based on Gaussian approximations of loopy belief propagation. The GAMP framework is extremely general, provides a systematic procedure for incorporating a rich class of nonlinearities, and is computationally tractable with large amounts of data. In addition, for both the connectivity reconstruction and parameter estimation problems, I show that GAMP-based estimation can naturally incorporate sparsity constraints in the model, which arise from the fact that only a small fraction of the potential inputs have any influence on the output of a particular neuron. A simulation of cortical neural mapping reconstruction under multi-neuron excitation shows that GAMP offers an improvement over previous compressed sensing methods. The GAMP method is also validated on estimation of linear-nonlinear-Poisson (LNP) cascade models for neural responses of salamander retinal ganglion cells.<br />
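The linear-nonlinear-Poisson (LNP) cascade mentioned at the end of the abstract is simple to simulate; a minimal sketch (the filter k, the softplus nonlinearity, and the white-noise stimulus are illustrative choices, not details from the talk):<br />

```python
import numpy as np

rng = np.random.default_rng(0)

# Stimulus: T frames of white noise, each flattened to a D-vector.
T, D = 5000, 16
stim = rng.standard_normal((T, D))

# Linear stage: project each frame onto a fixed receptive-field filter k
# (hypothetical values: the first 4 pixels drive the cell).
k = np.zeros(D)
k[:4] = 1.0
drive = stim @ k

# Nonlinear stage: softplus maps the filter output to a nonnegative rate.
rate = np.log1p(np.exp(drive))

# Poisson stage: spike counts per time bin, conditionally independent given the rate.
spikes = rng.poisson(rate)

print(spikes.shape, spikes.mean())
```

Fitting such a model means recovering k (and the nonlinearity) from stimulus-spike pairs, which is the estimation problem the abstract addresses with GAMP.<br />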
<br />
=== 2009/10 academic year ===<br />
<br />
'''2 September 2009''' <br />
* Speaker: Keith Godfrey<br />
* Affiliation: University of Cambridge<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''7 October 2009'''<br />
* Speaker: Anita Schmid<br />
* Affiliation: Cornell University<br />
* Host: Kilian<br />
* Status: Confirmed<br />
* Title: Subpopulations of neurons in visual area V2 perform differentiation and integration operations in space and time<br />
* Abstract: The interconnected areas of the visual system work together to find object boundaries in visual scenes. Primary visual cortex (V1) mainly extracts oriented luminance boundaries, while secondary visual cortex (V2) also detects boundaries defined by differences in texture. How the outputs of V1 neurons are combined to allow for the extraction of these more complex boundaries in V2 is as yet unclear. To address this question, we probed the processing of orientation signals in single neurons in V1 and V2, focusing on response dynamics of neurons to patches of oriented gratings and to combinations of gratings in neighboring patches and sequential time frames. We found two kinds of response dynamics in V2, both of which are different from those of V1 neurons. While V1 neurons in general prefer one orientation, one subpopulation of V2 neurons (“transient”) shows a temporally dynamic preference, resulting in a preference for changes in orientation. The second subpopulation of V2 neurons (“sustained”) responds similarly to V1 neurons, but with a delay. The dynamics of nonlinear responses to combinations of gratings reinforce these distinctions: the dynamics enhance the preference of V1 neurons for continuous orientations, and enhance the preference of V2 transient neurons for discontinuous ones. We propose that transient neurons in V2 perform a differentiation operation on the V1 input, both spatially and temporally, while the sustained neurons perform an integration operation. We show that a simple feedforward network with delayed inhibition can account for the temporal but not for the spatial differentiation operation.<br />
<br />
'''28 October 2009'''<br />
* Speaker: Andrea Benucci<br />
* Affiliation: Institute of Ophthalmology, University College London<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Stimulus dependence of the functional connectivity between neurons in primary visual cortex<br />
* Abstract: It is known that visual stimuli are encoded by the concerted activity of large populations of neurons in visual cortical areas. However, it is only recently that recording techniques have been made available to study such activations from large ensembles of neurons simultaneously, with millisecond temporal precision and tens of microns spatial resolution. I will present data from voltage-sensitive dye (VSD) imaging and multi-electrode recordings (“Utah” probes) from the primary visual cortex of the cat (V1). I will discuss the relationship between two fundamental cortical maps of the visual system: the map of retinotopy and the map of orientation. Using spatially localized and full-field oriented stimuli, we studied the functional interdependency of these maps. I will describe traveling and standing waves of cortical activity and their key role as a dynamical substrate for the spatio-temporal coding of visual information. I will further discuss the properties of the spatio-temporal code in the context of continuous visual stimulation. While recording population responses to a sequence of oriented stimuli, we asked how responses to individual stimuli summate over time. We found that such rules are mostly linear, supporting the idea that spatial and temporal codes in area V1 operate largely independently. However, these linear rules of summation fail when the visual drive is removed, suggesting that the visual cortex can readily switch between a dynamical regime where either feed-forward or intra-cortical inputs determine the response properties of the network.<br />
<br />
'''12 November 2009 (Thursday)'''<br />
* Speaker: Song-Chun Zhu<br />
* Affiliation: UCLA<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''18 November 2009'''<br />
* Speaker: Dan Graham<br />
* Affiliation: Dept. of Mathematics, Dartmouth College<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: The Packet-Switching Brain: A Hypothesis<br />
* Abstract: Despite great advances in our understanding of neural responses to natural stimuli, the basic structure of the neural code remains elusive. In this talk, I will describe a novel hypothesis regarding the fundamental structure of neural coding in mammals. In particular, I propose that an internet-like routing architecture (specifically packet-switching) underlies neocortical processing, and I propose means of testing this hypothesis via neural response sparseness measurements. I will synthesize a host of suggestive evidence that supports this notion and will, more generally, argue in favor of a large-scale shift from the now-dominant “computer metaphor” to the “internet metaphor.” This shift is intended to spur new thinking with regard to neural coding, and its main contribution is to privilege communication over computation as the prime goal of neural systems.<br />
<br />
'''16 December 2009'''<br />
* Speaker: Pietro Berkes<br />
* Affiliation: Volen Center for Complex Systems, Brandeis University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Generative models of vision: from sparse coding toward structured models<br />
* Abstract: From a computational perspective, one can think of visual perception as the problem of analyzing the light patterns detected by the retina to recover their external causes. This process requires combining the incoming sensory evidence with internal prior knowledge about general properties of visual elements and the way they interact, and can be formalized in a class of models known as causal generative models. In the first part of the talk, I will discuss the first and most established generative model, namely the sparse coding model. Sparse coding has been largely successful in showing how the main characteristics of simple-cell receptive fields can be accounted for based solely on the statistics of natural images. I will briefly review the evidence supporting this model, and contrast it with recent data from the primary visual cortex of ferrets and rats showing that the sparseness of neural activity over development and anesthesia seems to follow trends opposite to those predicted by sparse coding. In the second part, I will argue that the generative point of view calls for models of natural images that take into account more of the structure of the visual environment. I will present a model that takes a first step in this direction by incorporating the fundamental distinction between identity and attributes of visual elements. After learning, the model mirrors several aspects of the organization of V1, and results in a novel interpretation of complex and simple cells as parallel populations of cells, coding for different aspects of the visual input. Further steps toward more structured generative models might thus lead to the development of a more comprehensive account of processing in the visual cortex.<br />
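As a concrete reference point for the sparse coding model mentioned above, here is a minimal sketch of sparse inference on a fixed dictionary via ISTA (iterative soft-thresholding); the dictionary, penalty, and dimensions are arbitrary illustrative choices, not the speaker's model.<br />

```python
import numpy as np

# Sparse coding inference: find sparse a minimizing
#   ||x - Phi a||^2 / 2 + lam * ||a||_1
# by gradient steps on the residual followed by soft-thresholding (ISTA).
def ista(x, Phi, lam=0.1, n_iter=200):
    L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of gradient
    a = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        a = a + Phi.T @ (x - Phi @ a) / L        # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
Phi = rng.standard_normal((16, 32))
Phi /= np.linalg.norm(Phi, axis=0)               # unit-norm dictionary elements
a_true = np.zeros(32)
a_true[3], a_true[17] = 1.0, -0.8                # a 2-sparse "cause"
x = Phi @ a_true
a_hat = ista(x, Phi, lam=0.05)
print(np.nonzero(np.abs(a_hat) > 0.1)[0])        # recovered sparse support
```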
<br />
'''6 January 2010'''<br />
* Speaker: Susanne Still<br />
* Affiliation: U of Hawaii<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''20 January 2010'''<br />
* Speaker: Tom Dean<br />
* Affiliation: Google<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Accelerating Computer Vision and Machine Learning Algorithms with Graphics Processors<br />
* Abstract: Graphics processors (GPUs) and massively-multi-core architectures are becoming more powerful, less costly and more energy efficient, and the related programming language issues are beginning to sort themselves out. That said, most researchers don’t want to be writing code that depends on any particular architecture or parallel programming model. Linear algebra, Fourier analysis and image processing have standard libraries that are being ported to exploit SIMD parallelism in GPUs. We can depend on the massively-multi-core machines du jour to support these libraries and on the high-performance-computing (HPC) community to do the porting for us or with us. These libraries can significantly accelerate important applications in image processing, data analysis and information retrieval. We can develop APIs and the necessary run-time support so that code relying on these libraries will run on any machine in a cluster of computers but exploit GPUs whenever available. This strategy allows us to move toward hybrid computing models that enable a wider range of opportunities for parallelism without requiring the special training of programmers or the disadvantages of developing code that depends on specialized hardware or programming models. This talk summarizes the state of the art in massively-multi-core architectures, presents experimental results that demonstrate the potential for significant performance gains in the two general areas of image processing and machine learning, provides examples of the proposed programming interface, and gives some more detailed experimental results on one particular problem involving video-content analysis.<br />
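A modern analogue of the "exploit GPUs whenever available" strategy is backend selection behind a shared array API; the sketch below (our example, not the proposed interface from the talk) falls back from CuPy to NumPy and runs the same FFT-based routine on either.<br />

```python
# Pick an array backend at import time and write the algorithm once against
# the shared NumPy-style API. CuPy is used purely as an example GPU backend.
try:
    import cupy as xp            # GPU backend, if installed
except ImportError:
    import numpy as xp           # CPU fallback

def convolve_rows(img, kernel):
    """FFT-based circular row convolution; identical code on CPU or GPU."""
    K = xp.fft.rfft(kernel, n=img.shape[1])
    return xp.fft.irfft(xp.fft.rfft(img, axis=1) * K, n=img.shape[1], axis=1)

img = xp.ones((4, 8))
out = convolve_rows(img, xp.array([0.25, 0.5, 0.25]))
print(type(out).__module__, out.shape)
```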
<br />
'''27 January 2010'''<br />
* Speaker: David Philiponna<br />
* Affiliation: Paris<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''24 February 2010'''<br />
* Speaker: Gordon Pipa<br />
* Affiliation: U Osnabrueck/MPI Frankfurt<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''3 March 2010'''<br />
* Speaker: Gaute Einevoll<br />
* Affiliation: UMB, Norway<br />
* Host: Amir<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
<br />
'''4 March 2010'''<br />
* Speaker: Harvey Swadlow<br />
* Affiliation: <br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''8 April 2010'''<br />
* Speaker: Alan Yuille <br />
* Affiliation: UCLA<br />
* Host: Amir<br />
* Status: Confirmed (for 1pm)<br />
* Title: <br />
* Abstract:<br />
<br />
'''28 April 2010'''<br />
* Speaker: Dharmendra Modha - cancelled<br />
* Affiliation: IBM<br />
* Host:Fritz<br />
* Status: Cancelled<br />
* Title: <br />
* Abstract:<br />
<br />
'''5 May 2010'''<br />
* Speaker: David Zipser<br />
* Affiliation: UCB<br />
* Host: Daniel Little<br />
* Status: Tentative<br />
* Title: Brytes 2:<br />
* Abstract:<br />
<br />
Brytes are little brains that can be assembled into larger, smarter brains. In my first talk I presented a biologically plausible, computationally tractable model of brytes and described how they can be used as subunits to build brains with interesting behaviors.<br />
<br />
In this talk I will first show how large numbers of brytes can cooperate to perform complicated actions such as arm and hand manipulations in the presence of obstacles. Then I will describe a strategy for a higher level of control that informs each bryte of the role it should play in accomplishing the current task. These results could have considerable significance for understanding the brain and possibly be applicable to robotics and BMI.<br />
<br />
'''12 May 2010'''<br />
* Speaker: Frank Werblin (Redwood group meeting - internal only)<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: Tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''19 May 2010'''<br />
* Speaker: Anna Judith<br />
* Affiliation: UCB<br />
* Host: Daniel Little (Redwood Lab Meeting - internal only)<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:</div>Giselyhttps://rctn.org/w/index.php?title=Seminars&diff=8874Seminars2017-07-17T19:04:13Z<p>Gisely: /* Tentative / Confirmed Speakers */</p>
<hr />
<div>== Instructions ==<br />
<br />
# Check the internal calendar (here) for a free seminar slot. Seminars are usually Wednesdays at noon, but the time is flexible in case there is a day that works better for the speaker. However, it is usually best to avoid booking multiple speakers in the same week - it leads to "seminar burnout" and reduced attendance. But use your own judgement here - if it's a good opportunity and that's the only time that works, then go ahead with it.<br />
# Once you have proposed a date to a speaker, fill in the speaker information under the appropriate date (or change if necessary). Use the status field to indicate whether the date is tentative or confirmed. Please also include your name as ''host'' in case somebody wants to contact you.<br />
# Once the invitation is confirmed with the speaker, change the status field to 'confirmed'. Also notify the webmaster (Bruno) [mailto:baolshausen@berkeley.edu] that we have a confirmed speaker so that he/she can update the public web page. Please include a title and abstract.<br />
# Natalie (HWNI) checks our web page regularly and will send out an announcement a week before and also include with the weekly neuro announcements, but if you don't get it confirmed until the last minute then make sure to email Natalie [mailto:nrterranova@berkeley.edu] as well to give her a heads up so she knows to send out an announcement in time.<br />
# If the speaker needs accommodations you should contact Natalie [mailto:nrterranova@berkeley.edu] to reserve a room at the faculty club. Tell her it's for a Redwood speaker so she knows how to bill it.<br />
# During the visit you will need to look after the visitor, schedule visits with other labs, make plans for lunch, dinner, etc., and introduce them at the seminar (don't ask Bruno to do this at the last moment). Save receipts for any meals you paid for.<br />
# After the seminar and before the speaker leaves, make sure to give them Natalie's contact info and have them email her their receipts, explaining it's for reimbursement for a Redwood seminar. Natalie will then process the reimbursement. She can also help you with getting reimbursed for any expenses you incurred for meals and entertainment.<br />
<br />
== Tentative / Confirmed Speakers ==<br />
<br />
<br />
'''July 10, 2017'''<br />
* Speaker: David Field<br />
* Time: 6:00pm<br />
* Affiliation: Cornell<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
<br />
'''July 18, 2017'''<br />
* Speaker: Jordi Puigbò<br />
* Time: 12:30<br />
* Affiliation: Synthetic, Perceptive, Emotive and Cognitive Systems (SPECS) lab, Dept. of Information and Telecommunication Technologies, Universitat Pompeu Fabra (Barcelona - Spain)<br />
* Host: Vasha<br />
* Status: Confirmed<br />
* Title: State Dependent Modulation of Perception Based on a Computational Model of Conditioning<br />
* Abstract: The embodied mammalian brain evolved to adapt to an only partially known and knowable world. The adaptive labeling of the world is critically dependent on the neocortex, which in turn is modulated by a range of subcortical systems such as the thalamus, ventral striatum, and the amygdala. A particular case in point is the learning paradigm of classical conditioning, where acquired representations of states of the world such as sounds and visual features are associated with predefined discrete behavioral responses such as eye blinks and freezing. Learning progresses in a very specific order, where the animal first identifies the features of the task that are predictive of a motivational state and then forms the association of the current sensory state with a particular action and shapes this action to the specific contingency. This adaptive feature selection has both attentional and memory components, i.e. a behaviorally relevant state must be detected while its representation must be stabilized to allow its interfacing to output systems. Here we present a computational model of the neocortical systems that underlie this feature detection process and its state-dependent modulation mediated by the amygdala and its downstream target, the nucleus basalis of Meynert. Specifically, we analyze how amygdala-driven cholinergic modulation switches between two perceptual modes, one for exploitation of learned representations and prototypes and another for the exploration of the new representations that provoked the change in motivational state, presenting a framework for rapid learning of behaviorally relevant perceptual representations. Beyond reward-driven learning that is mostly based on exploitation, this talk presents a complementary mechanism for quick exploratory perception and learning grounded in the understanding of fear and surprise.<br />
<br />
'''Wed. July 19, 2017'''<br />
* Speaker: Evangelos Theodorou<br />
* Time: 12:30<br />
* Affiliation: GeorgiaTech<br />
* Host: Mike/Dibyendu Mandal<br />
* Status: confirmed<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''TBD, Week of September 25th 2017'''<br />
* Speaker: Caleb Kemere<br />
* Time: 12:00<br />
* Affiliation: Rice<br />
* Host: Guy Isely/Sean MacKesey<br />
* Status: planning<br />
* Title: TBD<br />
* Abstract: TBD-- HMM-based hippocampal replay<br />
<br />
<br />
'''TBD, 2016'''<br />
* Speaker: Alexander Stubbs<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Michael Levy<br />
* Status: tentative<br />
* Title: Could chromatic aberration allow for an alternative evolutionary pathway towards color vision?<br />
* Abstract: We present a mechanism by which organisms with only a single photoreceptor, which have a monochromatic view of the world, can achieve color discrimination. An off-axis pupil and the principle of chromatic aberration (where different wavelengths come to focus at different distances behind a lens) can combine to provide “color-blind” animals with a way to distinguish colors. As a specific example, we constructed a computer model of the visual system of cephalopods (octopus, squid, and cuttlefish) that have a single unfiltered photoreceptor type. We compute a quantitative image quality budget for this visual system and show how chromatic blurring dominates the visual acuity in these animals in shallow water. This proposed mechanism is consistent with the extensive suite of visual/behavioral and physiological data that has been obtained from cephalopod studies and offers a possible solution to the apparent paradox of vivid chromatic behaviors in color blind animals. Moreover, this proposed mechanism has potential applicability in organisms with limited photoreceptor complements, such as spiders and dolphins.<br />
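A toy thin-lens calculation (our illustration with invented constants, not the authors' model) shows the proposed cue: because focal length depends on wavelength, the accommodation that best focuses a point source differs by color, so a single-photoreceptor eye could in principle read out wavelength from focus.<br />

```python
# Thin-lens toy: focal length varies linearly with wavelength (invented
# dispersion constant), so the image of a point source forms at different
# depths for different colors. All lengths in mm, wavelengths in nm.
def focal_length(wavelength_nm, f550=10.0, dispersion=0.004):
    return f550 * (1.0 + dispersion * (wavelength_nm - 550.0) / 100.0)

def blur_diameter(obj_dist, retina_dist, wavelength_nm, pupil=1.0):
    f = focal_length(wavelength_nm)
    image_dist = 1.0 / (1.0 / f - 1.0 / obj_dist)   # thin-lens equation
    return pupil * abs(retina_dist - image_dist) / image_dist

def best_focus(wavelength_nm):
    # sweep accommodation and keep the retina distance with the least blur
    candidates = [10.0 + 0.001 * i for i in range(2000)]
    return min(candidates, key=lambda d: blur_diameter(1000.0, d, wavelength_nm))

print(best_focus(450.0), best_focus(650.0))   # blue focuses nearer than red
```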
<br />
'''Sept. 4 or 11, 2017'''<br />
* Speaker: Carl Pabo<br />
* Time: 12:00<br />
* Affiliation:<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: <br />
* Abstract:<br />
<br />
== Previous Seminars ==<br />
<br />
=== 2016/17 academic year ===<br />
<br />
'''Sept. 7, 2016'''<br />
* Speaker: Dan Stowell<br />
* Time: 12:00<br />
* Affiliation: Queen Mary, University of London<br />
* Host: Frederic Theunissen<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept. 8, 2016'''<br />
* Speaker: Barb Finlay<br />
* Time: 12:00<br />
* Affiliation: Cornell Univ<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept. 27, 2016'''<br />
* Speaker: Yoshua Bengio<br />
* Time: 11:00<br />
* Affiliation: Univ Montreal<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Oct. 12, 2016'''<br />
* Speaker: Paul Rhodes<br />
* Time: 4:00<br />
* Affiliation: Specific Technologies<br />
* Host: Dylan/Bruno<br />
* Status: confirmed<br />
* Title: A novel and important problem in spatiotemporal pattern classification<br />
* Abstract: Specific Technologies uses a sensor response that consists of a vector time series, a spatiotemporal fingerprint, to classify bacteria at the strain level during their growth. The identification of resistant strains of bacteria has become one of the world's great problems (here is a link to a $20M prize that the US govt has issued: https://www.nih.gov/news-events/news-releases/federal-prize-competition-seeks-innovative-ideas-combat-antimicrobial-resistance). We are using deep convolutional nets to do this classification, but they are instantaneous, and so do not capture the temporal patterns that are often at the core of what differentiates strains. So using the full temporal character of the sensor response time series is a cutting edge neural ML problem, and important to society too.<br />
<br />
'''Oct. 25, 2016'''<br />
* Speaker: Douglas L. Jones<br />
* Time: 2:00<br />
* Affiliation: ECE Department, University of Illinois at Urbana-Champaign<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Optimal energy-efficient coding in sensory neurons<br />
* Abstract: Evolutionary pressure suggests that the spike-based code in the sensory nervous system should satisfy two opposing constraints: 1) minimize signal distortion in the encoding process (i.e., maintain fidelity) by keeping the average spike rate as high as possible, and 2) minimize the metabolic load on the neuron by keeping the average spike rate as low as possible. We hypothesize that selective pressure has shaped the biophysics of a neuron to satisfy these conflicting demands. An energy-fidelity trade-off can be obtained through a constrained optimization process that achieves the lowest signal distortion for a given constraint on the spike rate. We derive the asymptotically optimal average-energy-constrained neuronal source code and show that it leads to a dynamic threshold that functions as an internal decoder (reconstruction filter) and adapts a spike-firing threshold so that spikes are emitted only when the coding error reaches this threshold. A stochastic extension is obtained by adding internal noise (dithering, or stochastic resonance) to the spiking threshold. We show that the source-coding neuron model i) reproduces experimentally observed spike-times in response to a stimulus, and ii) reproduces the serial correlations in the observed sequence of inter-spike intervals, using data from a peripheral sensory neuron and a central (cortical) somatosensory neuron. Finally, we show that the spike-timing code, although a temporal code, is in the limit of high firing rates an instantaneous rate code and accurately predicts the peri-stimulus time histogram (PSTH). We conclude by suggesting possible biophysical (ionic) mechanisms for this coding scheme.<br />
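The error-threshold scheme can be caricatured in a few lines (our sketch of the idea for a rising stimulus, not the speaker's derivation): the encoder runs the decoder internally and spikes only when the reconstruction falls a full threshold behind the signal, so raising the threshold lowers the spike rate at the cost of fidelity.<br />

```python
import numpy as np

# Encoder that runs its own decoder: a spike is emitted only when the
# reconstruction lags the signal by a full threshold theta, and each spike
# advances the internal reconstruction by one quantum of size theta.
# (Sketch for a rising signal only; a full model needs on/off channels.)
def encode(signal, theta):
    recon, spikes, trace = 0.0, [], []
    for i, s in enumerate(signal):
        if s - recon >= theta:      # coding error reached threshold: spike
            spikes.append(i)
            recon += theta          # internal decoder update
        trace.append(recon)
    return spikes, np.array(trace)

time = np.linspace(0.0, 1.0, 100)
signal = 2.0 * time                  # ramp stimulus
sp_lo, rec_lo = encode(signal, theta=0.1)
sp_hi, rec_hi = encode(signal, theta=0.4)
print(len(sp_lo), len(sp_hi))        # lower threshold: more spikes, less error
```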
<br />
'''October 26, 2016'''<br />
* Speaker: Eric Jonas<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Charles Frye<br />
* Status: confirmed<br />
* Title: Could a neuroscientist understand a microprocessor?<br />
* Abstract: There is a popular belief in neuroscience that we are primarily data limited, that producing large, multimodal, and complex datasets will, enabled by data analysis algorithms, lead to fundamental insights into the way the brain processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. Here we take a simulated classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the processor. This suggests that current computational approaches in neuroscience may fall short of producing meaningful models of the brain. We discuss several obvious shortcomings with this model, and ways that they might be addressed, both experimentally and computationally. <br />
* Bio: Eric Jonas is currently a postdoc in computer science at UC Berkeley working with Ben Recht on machine learning for scientific data acquisition. He earned his PhD in Computational Neuroscience, M. Eng in Electrical Engineering, BS in Electrical Engineering and Computer Science, and BS in Neurobiology, all from MIT. Prior to his return to academia, he was founder and CEO of Prior Knowledge, a predictive database company which was acquired in 2012 by Salesforce.com, where he was Chief Predictive Scientist until 2014. In 2015 he was named one of the top rising stars in bioengineering by the Defense Department’s Advanced Research Projects Agency (DARPA).<br />
<br />
'''Nov. 9, 2016'''<br />
* Speaker: Pulkit Agrawal<br />
* Time: 12:00<br />
* Affiliation: EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''Nov. 16, 2016'''<br />
* Speaker: Sebastian Musslick<br />
* Time: 12:00<br />
* Affiliation: Princeton Neuroscience Institute (Princeton University)<br />
* Host: Brian Cheung<br />
* Status: confirmed<br />
* Title: Parallel Processing Capability Versus Efficiency of Representation in Neural Network Architectures<br />
* Abstract: One of the most salient and well-recognized features of human goal-directed behavior is our limited ability to conduct multiple demanding tasks at once. Why is this? Some have suggested it reflects metabolic limitations, or structural ones. However, both explanations are unlikely. The brain routinely demonstrates the ability to carry out a multitude of processes in an enduring and parallel manner (walking, breathing, listening). Why, in contrast, is its capacity for allocating attention to control-demanding tasks - such a critical and powerful function - so limited? In the first part of my talk I will describe a computational framework that explains limitations of parallel processing in neural network architectures as the result of cross-talk between shared task representations. Using graph-theoretic analyses we show that the parallel processing (multitasking) capability of two-layer networks drops precipitously as a function of task pathway overlap, and scales highly sublinearly with network size. I will describe how this analysis can be applied to task representations encoded in neural networks or neuroimaging data, and show how it can be used to predict both concurrent and sequential multitasking performance in trained neural networks based on single task representations. Our results suggest that maximal parallel processing performance is achieved by segregating task pathways, by separating the representations on which they rely. However, there is a countervailing pressure for pathways to intersect: the re-use of representations to facilitate learning of new tasks. In the second part of my talk I will demonstrate a tradeoff between learning efficiency and parallel processing capability in neural networks. It can be shown that weight priors on learned task similarity improve learning speed and generalization but lead to strong constraints on parallel processing capability. 
These findings will be compared with an ongoing behavioral study assessing learning and multitasking performance of human subjects across tasks with varying degrees of feature overlap.<br />
<br />
'''Nov 30, 2016'''<br />
* Speaker: Marcus Rohrbach<br />
* Time: 12:00<br />
* Affiliation: EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''March 1st, 2017'''<br />
* Speaker: Sahar Akram<br />
* Time: 12:00<br />
* Affiliation: Starkey Hearing Research Center <br />
* Host: Shariq<br />
* Status: Confirmed<br />
* Title: Real-Time & Adaptive Auditory Neural Processing<br />
* Abstract: Decoding the dynamics of brain activity underlying conscious behavior is one of the key questions in systems neuroscience. Sensory neurons, such as those in the auditory system, can undergo rapid and task-dependent changes in their response characteristics during attentive behavior, and thereby result in functional changes in the system over time. In order to quantify human’s conscious experience, neuroimaging techniques such as electroencephalography (EEG) and magnetoencephalography (MEG) are widely used to record the neural activity from the brain with millisecond temporal resolution. Therefore, a dynamic decoding framework on par with the sampling resolution of EEG/MEG is crucial in order to better understand the neural correlates underlying sophisticated cognitive functions such as attention. I will talk about two recent attempts on real-time decoding of brain neural activity during a competing auditory attention task, using Bayesian hierarchical modeling and adaptive signal processing.<br />
<br />
'''Mar 2, 2017'''<br />
* Speaker: Joszef Fiser<br />
* Time: 12:00<br />
* Affiliation:<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Mar 22, 2017'''<br />
* Speaker: Michael Frank<br />
* Time: 12:00<br />
* Affiliation: Magicore Systems<br />
* Host: Dylan<br />
* Status: Confirmed<br />
* Title: The Future of the Multi-core Platform: Task-Superscalar Extensions to the Von-Neumann Architecture and Optimization for Neural Networks<br />
* Abstract: Technology scaling had been carrying computer science through the second half of the 20th century until single CPU performance started leveling off, after which multi- and many-core processors, including GPUs, emerged as the substrate for high performance computing. Mobile market implementations followed this trend and today you might be carrying a phone with more than 16 different processors. For power efficiency reasons, many of the cores are specialized to perform limited functions (such as modem or connectivity control, graphics rendering, or future neural-network acceleration) with most mainstream phones containing four or more general purpose processors. As Steve Jobs insightfully commented almost a decade ago, “The way the processor industry is going is to add more and more cores, but nobody knows how to program those things.” Jobs was correct: programming these multiprocessor systems has become a challenge, and several programming models have been proposed in academia to address this issue. Power and thermals are also an ever-present thorn to mass market applications. Through the years, CPUs based on the von Neumann architecture have fended off attacks from many directions; today complex super-scalar implementations execute multiple instructions each clock cycle, parallel and out-of-order, keeping up the illusion of sequential processing. Recent research demonstrates, though, that augmenting the paradigm of the von Neumann architecture with a few established concepts from data-flow and task-parallel programming will create both a credible and intuitive parallel architecture, enabling notable compute efficiency improvement while retaining compatibility with the current mainstream. 
This talk will thus review the current state of the processor industry and, after highlighting why we are running out of steam in ILP, will outline the task-superscalar programming model as the “ring to rule them all” and provide insights as to how this architecture can take advantage of special HW acceleration for data-flow management and provide support for efficient neuromorphic computing.<br />
<br />
'''April 12, 2017'''<br />
* Speaker: Aapo Hyvarinen<br />
* Time: 12:00<br />
* Affiliation: Gatsby/UCL<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''May 24, 2017'''<br />
* Speaker: Pierre Sermanet<br />
* Time: 12:00<br />
* Affiliation: Google Brain<br />
* Host: Brian<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''May 30, 2017'''<br />
* Speaker: Heiko Schutt<br />
* Time: 12:00<br />
* Affiliation: Univ Tubingen<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''June 7, 2017'''<br />
* Speaker: Saurabh Gupta<br />
* Time: 12:00<br />
* Affiliation: UC Berkeley<br />
* Host: Spencer<br />
* Status: confirmed<br />
* Title: Cognitive Mapping and Planning for Visual Navigation<br />
* Abstract: We introduce a novel neural architecture for navigation in novel environments that learns a cognitive map from first person viewpoints and plans a sequence of actions towards goals in the environment. The Cognitive Mapper and Planner (CMP) is based on two key ideas: a) a unified joint architecture for mapping and planning, such that the mapping is driven by the needs of the planner, and b) a spatial memory with the ability to plan given an incomplete set of observations about the world. CMP constructs a top-down belief map of the world and applies a differentiable neural net planner to produce the next action at each time step. The accumulated belief of the world enables the agent to track visited regions of the environment. Our experiments demonstrate that CMP outperforms both reactive strategies and standard memory-based architectures and performs well even in novel environments. Furthermore, we show that CMP can also achieve semantically specified goals, such as “go to a chair”. This is joint work with James Davidson, Sergey Levine, Rahul Sukthankar and Jitendra Malik.<br />
<br />
'''June 14, 2017'''<br />
* Speaker: Madhow<br />
* Time: 12:00<br />
* Affiliation: UCSB<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''June 19, 2017'''<br />
* Speaker: Tali Tishby<br />
* Time: 12:00<br />
* Affiliation: Hebrew Univ.<br />
* Host: Bruno/Daniel Reichman<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''June 21, 2017'''<br />
* Speaker: Jasmine Collins<br />
* Time: 12:00<br />
* Affiliation: Google<br />
* Host: Brian<br />
* Status: confirmed<br />
* Title: Capacity and Trainability in Recurrent Neural Networks <br />
* Abstract: Two potential bottlenecks on the expressiveness of recurrent neural networks (RNNs) are their ability to store information about the task in their parameters, and to store information about the input history in their units. We show experimentally that all common RNN architectures achieve nearly the same per-task and per-unit capacity bounds with careful training, for a variety of tasks and stacking depths. They can store an amount of task information which is linear in the number of parameters, and is approximately 5 bits per parameter. They can additionally store approximately one real number from their input history per hidden unit. We further find that for several tasks it is the per-task parameter capacity bound that determines performance. These results suggest that many previous results comparing RNN architectures are driven primarily by differences in training effectiveness, rather than differences in capacity. Supporting this observation, we compare training difficulty for several architectures, and show that vanilla RNNs are far more difficult to train, yet have slightly higher capacity. Finally, we propose two novel RNN architectures, one of which is easier to train than the LSTM or GRU for deeply stacked architectures.<br />
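The reported ~5 bits per parameter is easy to turn into a back-of-the-envelope estimate (our arithmetic illustrating the rule of thumb from the abstract; the layer sizes below are arbitrary):<br />

```python
# Parameter count of a vanilla RNN, h' = tanh(W_h h + W_x x + b_h),
# y = W_y h + b_y, multiplied by the reported ~5 bits per parameter.
def vanilla_rnn_params(n_in, n_hidden, n_out):
    recurrent = n_hidden * n_hidden + n_in * n_hidden + n_hidden
    readout = n_hidden * n_out + n_out
    return recurrent + readout

n_params = vanilla_rnn_params(n_in=64, n_hidden=256, n_out=10)
print(n_params, "parameters ~", 5 * n_params, "bits of task capacity")
```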
<br />
=== 2015/16 academic year ===<br />
<br />
'''July 21, 2015'''<br />
* Speaker: Felix Effenberger<br />
* Affiliation: <br />
* Host: Chris H.<br />
* Status: confirmed<br />
* Title: <br />
* Abstract<br />
<br />
'''July 22, 2015'''<br />
* Speaker: Lav Varshney<br />
* Affiliation: Urbana-Champaign<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract<br />
<br />
'''July 23, 2015'''<br />
* Speaker: Xuemin Wei<br />
* Affiliation: Univ Penn<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract<br />
<br />
'''July 29, 2015'''<br />
* Speaker: Gonzalo Otazu<br />
* Affiliation: Cold Spring Harbor Laboratory, Long Island, NY<br />
* Host: Mike D<br />
* Status: Confirmed<br />
* Title: The Role of Cortical Feedback in Olfactory Processing<br />
* Abstract: The olfactory bulb receives rich glutamatergic projections from the piriform cortex. However, the dynamics and importance of these feedback signals remain unknown. In the first part of this talk, I will present data from multiphoton calcium imaging of cortical feedback in the olfactory bulb of awake mice. Responses of feedback boutons were sparse, odor specific, and often outlasted stimuli by several seconds. Odor presentation either enhanced or suppressed the activity of boutons. However, any given bouton responded with stereotypic polarity across multiple odors, preferring either enhancement or suppression. Inactivation of piriform cortex increased odor responsiveness and pairwise similarity of mitral cells but had little impact on tufted cells. We propose that cortical feedback differentially impacts these two output channels of the bulb by specifically decorrelating mitral cell responses to enable odor separation. In the second part of the talk I will introduce a computational model of odor identification in natural scenes that uses cortical feedback and how the model predictions match our experimental data.<br />
<br />
'''Aug 19, 2015'''<br />
* Speaker: Wujie Zhang<br />
* Affiliation: Columbia<br />
* Host: Bruno/Michael Yartsev<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Sept 2, 2015'''<br />
* Speaker: Jeremy Maitin-Shepard<br />
* Affiliation: Computer Science, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Combinatorial Energy Learning for Image Segmentation<br />
* Abstract: Recent advances in volume electron microscopy make it possible to image neuronal tissue volumes containing hundreds of thousands of neurons at sufficient resolution to discern even the finest neuronal processes. Accurate 3-D segmentation of these processes, densely packed in petavoxel-scale volumes, is the key bottleneck in reconstructing large-scale neural circuits.<br />
<br />
'''Sept 8, 2015'''<br />
* Speaker: Jennifer Hasler<br />
* Affiliation: Georgia Tech<br />
* Host: Bruno/Mika<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''October 29, 2015'''<br />
* Speaker: Garrett Kenyon<br />
* Affiliation: Los Alamos National Laboratory<br />
* Host: Dylan<br />
* Status: confirmed<br />
* Title: A Deconvolutional Competitive Algorithm (DCA)<br />
* Abstract: The Locally Competitive Algorithm (LCA) is a neurally-plausible sparse solver based on lateral inhibition between leaky integrator neurons. LCA accounts for many linear and nonlinear response properties of V1 simple cells, including end-stopping and contrast-invariant orientation tuning. Here, we describe a convolutional implementation of LCA in which a column of feature vectors is replicated with a stride that is much smaller than the diameter of the corresponding kernels, allowing the construction of dictionaries that are many times more overcomplete than without replication. Using a local Hebbian rule that minimizes sparse reconstruction error, we are able to learn representations from unlabeled imagery, including monocular and stereo video streams, that in some cases support near state-of-the-art performance on object detection, action classification and depth estimation tasks, with a simple linear classifier. We further describe a scalable approach to building a hierarchy of convolutional LCA layers, which we call a Deconvolutional Competitive Algorithm (DCA). All layers in a DCA are trained simultaneously and all layers contribute to a single image reconstruction, with each layer deconvolving its representation through all lower layers back to the image plane. We show that a 3-layer DCA trained on short video clips obtained from hand-held cameras exhibits a clear segregation of image content, with features in the top layer reconstructing large-scale structures while features in the middle and bottom layers reconstruct progressively finer details. Lastly, we describe PetaVision, an open source, cloud-friendly, high-performance neural simulation toolbox that was used to perform the numerical studies presented here.<br />
<br />
'''Nov 18, 2015'''<br />
* Speaker: Hillel Adesnik<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Nov 17, 2015'''<br />
* Speaker: Manuel Lopez<br />
* Affiliation: <br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''Dec 2, 2015'''<br />
* Speaker: Steven Brumby<br />
* Affiliation: [http://www.descarteslabs.com/ Descartes Labs]<br />
* Host: Dylan<br />
* Status: confirmed<br />
* Title: Seeing the Earth in the Cloud<br />
* Abstract: The proliferation of transistors has increased the performance of computing systems by over a factor of a million in the past 30 years, and is also dramatically increasing the amount of data in existence, driving improvements in sensor, communication and storage technology. Multi-decadal Earth and planetary remote sensing global datasets at the petabyte scale (8×10^15 bits) are now available in commercial clouds, and new satellite constellations are planning to generate petabytes of images per year, providing daily global coverage at a few meters per pixel. Cloud storage with adjacent high-bandwidth compute, combined with recent advances in neuroscience-inspired machine learning for computer vision, is enabling understanding of the world at a scale and at a level of granularity never before feasible. We report here on a computation processing over a petabyte of compressed raw data from 2.8 quadrillion pixels (2.8 petapixels) acquired by the US Landsat and MODIS programs over the past 40 years. Using commodity cloud computing resources, we convert the imagery to a calibrated, georeferenced, multiresolution tiled format suited for machine-learning analysis. We believe ours is the first application to process, in less than a day, on generally available resources, over a petabyte of scientific image data. We report on work using this reprocessed dataset for experiments demonstrating country-scale food production monitoring, an indicator for famine early warning. <br />
<br />
'''Dec 14, 2015'''<br />
* Speaker: Bill Softky <br />
* Affiliation:<br />
* Host: Bruno<br />
* Status: confirmed <br />
* Title: Screen addiction - informal Redwood group seminar<br />
<br />
'''Dec 16, 2015'''<br />
* Speaker: Mike Landy<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Feb 3, 2016'''<br />
* Speaker: Ping-Chen Huang<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Feb 17, 2016'''<br />
* Speaker: Andrew Saxe<br />
* Affiliation: Harvard<br />
* Host: Jesse<br />
* Status: confirmed<br />
* Title: Hallmarks of Deep Learning in the Brain<br />
<br />
'''Feb 24, 2016'''<br />
* Speaker: Miguel Carreira-Perpiñán<br />
* Affiliation: UC Merced<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
<br />
'''Mar 1, 2016'''<br />
* Speaker: Leon Gatys<br />
* Affiliation: Univ Tubingen<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''Mar 7-9, 2016'''<br />
* NICE workshop<br />
<br />
'''Mar 9, 2016'''<br />
* Tatiana Engel - HWNI job talk at 12:00<br />
<br />
'''Mar 16, 2016'''<br />
* Talia Lerner - HWNI job talk at 12:00<br />
<br />
'''Mar 23, 2016'''<br />
* Speaker: Kwabena Boahen<br />
* Affiliation: Stanford<br />
* Host: Max Kanwal/Bruno<br />
* Status: confirmed<br />
* Title:<br />
<br />
'''April 11, 2016'''<br />
* Speaker: Hao Su<br />
* Time: at 12:00<br />
* Affiliation: Geometric Computing Lab and Artificial Intelligence Lab, Stanford University<br />
* Host: Yubei<br />
* Status: confirmed<br />
* Title: [Tentative] Joint Analysis for 2D Images and 3D shapes<br />
* Abstract: Coming<br />
<br />
'''May 04, 2016'''<br />
* Speaker: Zhengya Zhang<br />
* Time: 12:00<br />
* Affiliation: Electrical Engineering and Computer Science, University of Michigan<br />
* Host: Dylan, Bruno<br />
* Status: Confirmed<br />
* Title: Sparse Coding ASIC Chips for Feature Extraction and Classification<br />
* Abstract: Hardware-based computer vision accelerators will be an essential part of future mobile and autonomous devices to meet the low power and real-time processing requirement. To realize a high energy efficiency and high throughput, the accelerator architecture can be massively parallelized and tailored to the underlying algorithms, which is an advantage over software-based solutions and general-purpose hardware. In this talk, I will present three application-specific integrated circuit (ASIC) chips that implement the sparse and independent local network (SAILnet) algorithm and the locally competitive algorithm (LCA) for feature extraction and classification. Two of the chips were designed using an array of leaky integrate-and-fire neurons. Sparse activations of the neurons make possible an efficient grid-ring architecture to deliver an image processing throughput of 1 G pixel/s using only 200 mW. The third chip was designed using a convolution approach. Sparsity is again an important factor that enabled the use of sparse convolvers to achieve an effective performance of 900 G operations/s using less than 150 mW.<br />
<br />
'''May 18, 2016'''<br />
* Speaker: Melanie Mitchell<br />
* Affiliation: Portland State University and Santa Fe Institute<br />
* Host: Dylan<br />
* Time: 12:00<br />
* Status: confirmed<br />
* Title: Using Analogy to Recognize Visual Situations<br />
* Abstract: Enabling computers to recognize abstract visual situations remains a hard open problem in artificial intelligence. No machine vision system comes close to matching human ability at identifying the contents of images or visual scenes, or at recognizing abstract similarity between different scenes, even though such abilities pervade human cognition. In this talk I will describe my research on getting computers to flexibly recognize visual situations by integrating low-level vision algorithms with an agent-based model of higher-level concepts and analogy-making. <br />
* Bio: Melanie Mitchell is Professor of Computer Science at Portland State University, and External Professor and Member of the Science Board at the Santa Fe Institute. She received a Ph.D. in Computer Science from the University of Michigan. Her dissertation, in collaboration with her advisor Douglas Hofstadter, was the development of Copycat, a computer program that makes analogies. She is the author or editor of five books and over 70 scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her most recent book, Complexity: A Guided Tour (Oxford, 2009), won the 2010 Phi Beta Kappa Science Book Award. It was also named by Amazon.com as one of the ten best science books of 2009, and was longlisted for the Royal Society's 2010 book prize. Melanie directs the Santa Fe Institute's Complexity Explorer project, which offers online courses and other educational resources related to the field of complex systems.<br />
<br />
'''June 8, 2016'''<br />
* Speaker: Kris Bouchard<br />
* Time: 12:00<br />
* Affiliation: LBNL<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: The union of intersections method<br />
* Abstract:<br />
<br />
'''June 15, 2016'''<br />
* Speaker: James Blackmon<br />
* Time: 12:00<br />
* Affiliation: San Francisco State University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
=== 2014/15 academic year ===<br />
<br />
'''2 July 2014'''<br />
* Speaker: Kelly Clancy<br />
* Affiliation: Feldman lab<br />
* Host: Guy<br />
* Status: confirmed<br />
* Title: Volitional control of neural assemblies in L2/3 of motor and somatosensory cortices<br />
* Abstract: I'll be talking about a joint effort between the Feldman, Carmena and Costa labs to study abstract task learning by small neuronal assemblies in intact networks. Brain-machine interfaces are a unique tool for studying learning, thanks to the direct mapping between neural activity and reward. We trained mice to operantly control an auditory cursor using spike-related calcium signals recorded with two-photon imaging in motor and somatosensory cortex, allowing us to assess the effects of learning with great spatial detail. Mice rapidly learned to modulate activity in layer 2/3 neurons, evident both across and within sessions. Interestingly, even neurons that exhibited very low or no spontaneous spiking--so-called 'silent' cells that are invisible to electrode-based techniques--could be behaviorally up-modulated for task performance. Learning was accompanied by modifications of firing correlations in spatially localized networks at fine scales.<br />
<br />
'''23 July 2014'''<br />
* Speaker: Gautam Agarwal<br />
* Affiliation: UC Berkeley/Champalimaud<br />
* Host: Friedrich Sommer<br />
* Status: confirmed<br />
* Title: Unsolved Mysteries of Hippocampal Dynamics<br />
* Abstract: Two radically different forms of electrical activity can be observed in the rat hippocampus: spikes and local field potentials (LFPs). Hippocampal pyramidal neurons are mostly silent, yet spike vigorously as the subject encounters particular locations in its environment. In contrast, LFPs appear to lack place-selectivity, persisting regardless of the rat's location. Recently, we found that in fact one can recover from LFPs the spatial information present in the underlying neuronal population, showing how these two signals are two sides of the same coin. Nonetheless, there are many aspects of the LFP that remain mysterious. I will review several observations and explanatory gaps which await further study. These include: the relationship of LFP patterns to anatomy; the elusive structure of gamma waves; complex forms of cross-frequency coupling; variations in LFP patterns seen when the rat explores its world more freely; reconciling the memory and navigation roles of the hippocampus.<br />
<br />
'''6 Aug 2014'''<br />
* Speaker: Georg Martius<br />
* Affiliation: Max Planck Institute, Leipzig<br />
* Host: Fritz Sommer<br />
* Status: confirmed<br />
* Title: Information driven self-organization of robotic behavior<br />
* Abstract: Autonomy is a puzzling phenomenon in nature and a major challenge in the world of artifacts. A key feature of autonomy in both natural and artificial systems is seen in the ability for independent exploration. In animals and humans, the ability to modify their own pattern of activity is not only an indispensable trait for adaptation and survival in new situations; it also provides a learning system with novel information for improving its cognitive capabilities, and it is essential for development. Efficient exploration in high-dimensional spaces is a major challenge in building learning systems. We propose to implement the exploration as a deterministic law derived from maximizing an information quantity. More specifically, we use the predictive information of the sensor process (of a robot) to obtain an update rule (exploration dynamics) for the controller parameters. To be adequate in robotics applications, the non-stationary nature of the underlying time series has to be taken into account, which we do by proposing the time-local predictive information (TiPI). Importantly, the exploration dynamics is derived analytically, and by this we link information theory and dynamical systems. Without a random component, the change in the parameters is deterministically given as a function of the states in a certain time window. For an embodied system this means in particular that constraints, responses and current knowledge of the dynamical interaction with the environment can directly be used to advance further exploration. Randomness is replaced with spontaneity, which we demonstrate to restrict the search space automatically to the physically relevant dimensions. Its effectiveness will be presented with various experiments on high-dimensional robotic systems, and we argue that this is a promising way to avoid the curse of dimensionality. This talk describes joint work with Ralf Der and Nihat Ay.<br />
<br />
'''15 Aug 2014'''<br />
* Speaker: Juergen Schmidhuber<br />
* Affiliation: IDSIA, Switzerland<br />
* Host: James/Shariq<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''2 Sept 2014'''<br />
* Speaker: Oriol Vinyals <br />
* Affiliation: Google<br />
* Host: Guy<br />
* Status: confirmed<br />
* Title: Machine Translation with Long-Short Term Memory Models<br />
* Abstract: Supervised large deep neural networks achieved good results on speech recognition and computer vision. Although very successful, deep neural networks can only be applied to problems whose inputs and outputs can be conveniently encoded with vectors of fixed dimensionality - but cannot easily be applied to problems whose inputs and outputs are sequences. In this work, we show how to use a large deep Long Short-Term Memory (LSTM) model to solve domain-agnostic supervised sequence to sequence problems with minimal manual engineering. Our model uses one LSTM to map the input sequence to a vector of a fixed dimensionality and another LSTM to map the vector to the output sequence. We applied our model to a machine translation task and achieved encouraging results. On the WMT'14 translation task from English to French, a model combination of 6 large LSTMs achieves a BLEU score of 32.3 (where a larger score is better). For comparison, a strong standard statistical MT baseline achieves a BLEU score of 33.3. When we use our LSTM to rescore the n-best lists produced by the SMT baseline, we achieve a BLEU score of 36.3, which is a new state of the art. This is joint work with Ilya Sutskever and Quoc Le.<br />
<br />
'''19 Sept 2014'''<br />
* Speaker: Gary Marcus<br />
* Affiliation: NYU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''24 Sept 2014'''<br />
* Speaker: Alyosha Efros<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''30 Sep 2014'''<br />
* Speaker: Alejandro Bujan<br />
* Affiliation:<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Propagation and variability of evoked responses: the role of correlated inputs and oscillations<br />
* Abstract: <br />
<br />
'''8 Oct 2014'''<br />
* Speaker: Siyu Zhang<br />
* Affiliation: UC Berkeley<br />
* Host: Karl<br />
* Status: confirmed<br />
* Title: Long-range and local circuits for top-down modulation of visual cortical processing<br />
* Abstract:<br />
<br />
'''15 Oct 2014'''<br />
* Speaker: Tamara Broderick<br />
* Affiliation: UC Berkeley<br />
* Host: Yvonne/James<br />
* Status: confirmed<br />
* Title: Feature allocations, probability functions, and paintboxes<br />
* Abstract: Clustering involves placing entities into mutually exclusive categories. We wish to relax the requirement of mutual exclusivity, allowing objects to belong simultaneously to multiple classes, a formulation that we refer to as "feature allocation." The first step is a theoretical one. In the case of clustering the class of probability distributions over exchangeable partitions of a dataset has been characterized (via exchangeable partition probability functions and the Kingman paintbox). These characterizations support an elegant nonparametric Bayesian framework for clustering in which the number of clusters is not assumed to be known a priori. We establish an analogous characterization for feature allocation; we define notions of "exchangeable feature probability functions" and "feature paintboxes" that lead to a Bayesian framework that does not require the number of features to be fixed a priori. The second step is a computational one. Rather than appealing to Markov chain Monte Carlo for Bayesian inference, we develop a method to transform Bayesian methods for feature allocation (and other latent structure problems) into optimization problems with objective functions analogous to K-means in the clustering setting. These yield approximations to Bayesian inference that are scalable to large inference problems.<br />
<br />
'''29 Oct 2014'''<br />
* Speaker: Ken Nakayama<br />
* Affiliation: Harvard<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Topics in higher level visuo-motor control<br />
* Abstract: TBA<br />
<br />
'''5 Nov 2014''' - '''BVLC retreat'''<br />
<br />
'''20 Nov 2014'''<br />
* Speaker: Haruo Hosoya<br />
* Affiliation: ATR Institute, Japan<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''9 Dec 2014'''<br />
* Speaker: Dirk DeRidder<br />
* Affiliation: Dunedin School of Medicine, University of Otago, New Zealand<br />
* Host: Bruno/Walter Freeman<br />
* Status: confirmed<br />
* Title: The Bayesian brain, phantom percepts and brain implants<br />
* Abstract: TBA<br />
<br />
'''January 14, 2015'''<br />
* Speaker: Kevin O'Regan<br />
* Affiliation: CNRS - Université Paris Descartes<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''January 21, 2015'''<br />
* Speaker: Adrienne Fairhall<br />
* Affiliation: University of Washington<br />
* Host: Mike Schachter<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''January 26, 2015'''<br />
* Speaker: Abraham Peled<br />
* Affiliation: Mental Health Center, 'Technion' Israel Institute of Technology<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Clinical Brain Profiling: A Neuro-Computational Psychiatry<br />
* Abstract: TBA<br />
<br />
'''January 28, 2015'''<br />
* Speaker: Rich Ivry<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Embodied Decision Making: System interactions in sensorimotor adaptation and reinforcement learning<br />
* Abstract:<br />
<br />
'''February 11, 2015'''<br />
* Speaker: Mark Lescroart<br />
* Affiliation: UC Berkeley<br />
* Host: Karl<br />
* Status: tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''February 25, 2015'''<br />
* Speaker: Steve Chase<br />
* Affiliation: CMU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Joint Redwood/CNEP seminar<br />
* Abstract:<br />
<br />
'''March 3, 2015'''<br />
* Speaker: Andreas Herz<br />
* Affiliation: Bernstein Center, Munich<br />
* Host: Bruno/Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''March 3, 2015 - 4:00'''<br />
* Speaker: James Cooke<br />
* Affiliation: Oxford<br />
* Host: Mike Deweese<br />
* Status: confirmed<br />
* Title: Neural Circuitry Underlying Contrast Gain Control in Primary Auditory Cortex<br />
* Abstract:<br />
<br />
'''March 4, 2015'''<br />
* Speaker: Bill Sprague<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: V1 disparity tuning and the statistics of disparity in natural viewing<br />
* Abstract:<br />
<br />
'''March 11, 2015'''<br />
* Speaker: Jozsef Fiser<br />
* Affiliation: Central European University<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''April 1, 2015'''<br />
* Speaker: Saeed Saremi<br />
* Affiliation: Salk Inst<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''April 15, 2015'''<br />
* Speaker: Zahra M. Aghajan<br />
* Affiliation: UCLA<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Hippocampal Activity in Real and Virtual Environments<br />
* Abstract:<br />
<br />
'''May 7, 2015'''<br />
* Speaker: Santani Teng<br />
* Affiliation: MIT<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''May 13, 2015'''<br />
* Speaker: Harri Valpola<br />
* Affiliation: ZenRobotics<br />
* Host: Brian<br />
* Status: Tentative<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''June 24, 2015'''<br />
* Speaker: Kendrick Kay<br />
* Affiliation: Department of Psychology, Washington University in St. Louis<br />
* Host: Karl<br />
* Status: Confirmed<br />
* Title: Using functional neuroimaging to reveal the computations performed by the human visual system<br />
* Abstract: Visual perception is the result of a complex set of computational transformations performed by neurons in the visual system. Functional magnetic resonance imaging (fMRI) is ideally suited for identifying these transformations, given its excellent spatial resolution and ability to monitor activity across the numerous areas of visual cortex. In this talk, I will review past research in which we used fMRI to develop increasingly accurate models of the stimulus transformations occurring in early and intermediate visual areas. I will then describe recent research in which we successfully extend this approach to high-level visual areas involved in perception of visual categories (e.g. faces) and demonstrate how top-down attention modulates bottom-up stimulus representations. Finally, I will discuss ongoing research targeting regions of ventral temporal cortex that are essential for skilled reading. Our model-based approach, combined with high-field laminar measurements, is expected to provide an integrated picture of how bottom-up stimulus transformations and top-down cognitive factors interact to support rapid and accurate word recognition. Development of quantitative models and associated experimental paradigms may help us understand and diagnose impairments in neural processing that underlie visual disorders such as dyslexia and prosopagnosia.<br />
<br />
=== 2013/14 academic year ===<br />
<br />
'''9 Oct 2013'''<br />
* Speaker: Ekaterina Brocke<br />
* Affiliation: KTH University, Stockholm, Sweden<br />
* Host: Tony<br />
* Status: confirmed<br />
* Title: Multiscale modeling in Neuroscience: first steps towards multiscale co-simulation tool development.<br />
* Abstract: Multiscale modeling and simulation attract an increasing number of neuroscientists who study how different levels of organization (networks of neurons, cellular/subcellular levels) interact with each other across multiple scales of space and time to mediate different brain functions. Different scales are usually described by different physical and mathematical formalisms, making the integration nontrivial. In this talk, I will discuss key phenomena in neuroscience that can be addressed using subcellular/cellular models, and possible approaches to performing multiscale simulations, in particular a co-simulation method. I will also introduce several multiscale "toy" models at the cellular/subcellular levels that were developed with the aim of understanding the numerical and technical problems which might appear during co-simulation. Finally, the first steps made towards the development of a multiscale co-simulation tool will be presented.<br />
<br />
'''29 Oct 2013 - note: 4:00'''<br />
* Speaker: Mitya Chklovskii<br />
* Affiliation: HHMI/Janelia Farm<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''30 Oct 2013'''<br />
* Speaker: Ilya Nemenman<br />
* Affiliation: Emory University, Departments of Physics and Biology<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Large N in neural data -- expecting the unexpected.<br />
* Abstract: Recently it has become possible to directly measure simultaneous collective states of many biological components, such as neural activities, genetic sequences, or gene expression profiles. These data are revealing striking results, suggesting, for example, that biological systems are tuned to criticality, and that effective models of these systems based on only pairwise interactions among constitutive components provide surprisingly good fits to the data. We will explore a handful of simplified theoretical models, largely focusing on statistical mechanics of Ising spins, that suggest plausible explanations for these observations. Specifically, I will argue that, at least in certain contexts, these intriguing observations should be expected in multivariate interacting data in the thermodynamic limit of many interacting components.<br />
<br />
'''31 Oct 2013'''<br />
* Speaker: Oriol Vinyals<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Brian<br />
* Status: confirmed<br />
* Title: Beyond Deep Learning: Scalable Methods and Models for Learning<br />
* Abstract: In this talk I will briefly describe several techniques I explored in my thesis that improve how to efficiently model signal representations and learn useful information from them. The building block of my dissertation is based on machine learning approaches to classification, where a (typically non-linear) function is learned from labeled examples to map from signals to some useful information (e.g. an object class present in an image, or a word present in an acoustic signal). One of the motivating factors of my work has been advances in neural networks with deep architectures (which has led to the terminology "deep learning"), which have shown state-of-the-art performance in acoustic modeling and object recognition -- the main focus of this thesis. In my work, I have contributed to both the learning (or training) of such architectures through faster and more robust optimization techniques, and also to the simplification of the deep architecture model to an approach that is simple to optimize. Furthermore, I derived a theoretical bound showing a fundamental limitation of shallow architectures based on sparse coding (which can be seen as a one-hidden-layer neural network), thus justifying the need for deeper architectures, while also empirically verifying these architectural choices on speech recognition. Many of my contributions have been used in a wide variety of applications, products and datasets as a result of many collaborations within ICSI and Berkeley, but also at Microsoft Research and Google Research.<br />
<br />
'''6 Nov 2013'''<br />
* Speaker: Garrett T. Kenyon<br />
* Affiliation: Los Alamos National Laboratory, The New Mexico Consortium<br />
* Host: Dylan Paiton<br />
* Status: Confirmed<br />
* Title: Using Locally Competitive Algorithms to Model Top-Down and Lateral Interactions<br />
* Abstract: Cortical connections consist of feedforward, feedback and lateral pathways. Infragranular layers project down the cortical hierarchy to both supra- and infragranular layers at the previous processing level, while the neurons in supragranular layers are linked by extensive long-range lateral projections that cross multiple cortical columns. However, most functional models of visual cortex only account for feedforward connections. Additionally, most models of visual cortex fail to account both for the thalamic projections to non-striate areas and the reciprocal connections from extrastriate areas back to the thalamus. In this talk, I will describe how a modified Locally Competitive Algorithm (LCA; Rozell et al, Neural Comp, 2008) can be used as a unifying framework for exploring the role of top-down and lateral cortical pathways within the context of deep, sparse, generative models. I will also describe an open source software tool called PetaVision that can be used to implement and execute hierarchical LCA-based models on multi-core, multi-node computer platforms without requiring specific knowledge of parallel-programming constructs.<br />
<br />
'''14 Nov 2013 (note: Thursday), 12:30pm'''<br />
* Speaker: Geoffrey J Goodhill<br />
* Affiliation: Queensland Brain Institute and School of Mathematics and Physics, The University of Queensland, Australia<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Computational principles of neural wiring development<br />
* Abstract: Brain function depends on precise patterns of neural wiring. An axon navigating to its target must make guidance decisions based on noisy information from molecular cues in its environment. I will describe a combination of experimental and computational work showing that (1) axons may act as ideal observers when sensing chemotactic gradients, (2) the complex influence of calcium and cAMP levels on guidance decisions can be predicted mathematically, (3) the morphology of growth cones at the axonal tip can be understood in terms of just a few eigenshapes, and remarkably these shapes oscillate in time with periods ranging from minutes to hours. Together this work may shed light on how neural wiring goes wrong in some developmental brain disorders, and how best to promote appropriate regrowth of axons after injury.<br />
<br />
'''4 Dec 2013'''<br />
* Speaker: Zhenwen Dai<br />
* Affiliation: FIAS, Goethe University Frankfurt, Germany.<br />
* Host: Georgios Exarchakis<br />
* Status: Confirmed<br />
* Title: What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach <br />
* Abstract: We study optimal image encoding based on a generative approach with non-linear feature combinations and explicit position encoding. By far most approaches to unsupervised learning of visual features, such as sparse coding or ICA, account for translations by representing the same features at different positions. Some earlier models used a separate encoding of features and their positions to facilitate invariant data encoding and recognition. All probabilistic generative models with explicit position encoding have so far assumed a linear superposition of components to encode image patches. Here, we for the first time apply a model with non-linear feature superposition and explicit position encoding for patches. By avoiding linear superpositions, the studied model represents a closer match to component occlusions which are ubiquitous in natural images. In order to account for occlusions, the non-linear model encodes patches qualitatively very different from linear models by using component representations separated into mask and feature parameters. We first investigated encodings learned by the model using artificial data with mutually occluding components. We find that the model extracts the components, and that it can correctly identify the occlusive components with the hidden variables of the model. On natural image patches, the model learns component masks and features for typical image components. By using reverse correlation, we estimate the receptive fields associated with the model’s hidden units. We find many Gabor-like or globular receptive fields as well as fields sensitive to more complex structures. Our results show that probabilistic models that capture occlusions and invariances can be trained efficiently on image patches, and that the resulting encoding represents an alternative model for the neural encoding of images in the primary visual cortex. <br />
<br />
'''11 Dec 2013'''<br />
* Speaker: Kai Siedenburg<br />
* Affiliation: UC Davis, Petr Janata's Lab.<br />
* Host: Jesse Engel<br />
* Status: Confirmed<br />
* Title: Characterizing Short-Term Memory for Musical Timbre<br />
* Abstract: Short-term memory is a cognitive faculty central for the apprehension of music and speech. Little is known, however, about memory for musical timbre despite its “sisterhood” with speech; after all, speech can be regarded as a sequencing of vocal timbre. Past research has isolated many characteristic effects of verbal memory. Are these also in play for non-vocal timbre sequences? We studied this question by considering short-term memory for serial order. Using timbres and dissimilarity data from McAdams et al. (Psych. Research, 1995), we employed a same/different discrimination paradigm. Experiment 1 (N = 30 MU + 30 nonMU) revealed effects of sequence length and timbral dissimilarity of items, as well as an interaction of musical training and pitch variability: in contrast to musicians, non-musicians' performance was impaired by simultaneous changes in pitch, compared to a constant-pitch baseline. Experiment 2 (N = 22) studied whether musicians' memory for timbre sequences was independent of pitch irrespective of the degree of complexity of pitch progressions. Comparing sequences with pitch changing within and across standard and comparison to a constant-pitch baseline, performance was now clearly impaired in the variable-pitch condition. Experiment 3 (N = 22) showed primacy and recency effects for musicians, and reproduced a positive effect of timbral heterogeneity of sequences. Our findings demonstrate the presence of hallmark effects of verbal memory such as similarity, word length, and primacy/recency in the domain of non-vocal timbre, and suggest that memory for speech and non-vocal timbre sequences might to a large extent share underlying mechanisms.<br />
<br />
'''12 Dec 2013'''<br />
* Speaker: Matthias Bethge<br />
* Affiliation: University of Tübingen<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''22 Jan 2014'''<br />
* Speaker: Thomas Martinetz<br />
* Affiliation: Univ Luebeck<br />
* Host: Bruno/Fritz<br />
* Status: confirmed<br />
* Title: Orthogonal Sparse Coding and Sensing<br />
* Abstract: Sparse Coding has been a very successful concept, since many natural signals have the property of being sparse in some dictionary (basis). Some natural signals, most prominently natural images, are even sparse in an orthogonal basis, namely a suitable wavelet transform. An encoding in an orthogonal basis has a number of advantages, e.g., finding the optimal coding coefficients is simply a projection instead of being NP-hard.<br />
Given some data, we want to find the orthogonal basis which provides the sparsest code. This problem can be seen as a generalization of Principal Component Analysis. We present an algorithm, Orthogonal Sparse Coding (OSC), which is able to find this basis very robustly. On natural images it compresses on the level of JPEG, but it can also adapt to arbitrary and special data sets and achieve significant improvements. Given the property of being sparse in some orthogonal basis, we show how signals can be sensed very efficiently in a hierarchical manner with at most k log D sensing actions. This hierarchical sensing might relate to the way we sense the world, with interesting applications in active vision.<br />
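Since the abstract turns on the claim that coding in an orthogonal basis is just a projection, here is a small illustration of that coding step (my own sketch with a toy Haar-like basis and fabricated data; it does not show the OSC basis-learning algorithm itself):

```python
import numpy as np

def orthogonal_sparse_code(x, U, k):
    """Best k-sparse code of x in the orthonormal basis U.

    Because U is orthogonal, the optimal coefficients are a plain
    projection U.T @ x, and the best k-term approximation simply keeps
    the k largest-magnitude coefficients -- no NP-hard search needed.
    """
    c = U.T @ x
    keep = np.argsort(np.abs(c))[-k:]         # indices of k largest coefficients
    a = np.zeros_like(c)
    a[keep] = c[keep]
    return a

# A 4-point orthonormal Haar basis (columns of U).
s = 1.0 / np.sqrt(2.0)
U = np.array([[0.5,  0.5,  0.5,  0.5],
              [0.5,  0.5, -0.5, -0.5],
              [s,   -s,    0.0,  0.0],
              [0.0,  0.0,  s,   -s  ]]).T

x = np.array([1.0, 1.0, 3.0, 3.0])            # piecewise-constant signal
a = orthogonal_sparse_code(x, U, k=2)
print(a)   # two Haar coefficients suffice to reconstruct x exactly
```

For a general (non-orthogonal) dictionary, the analogous best-k-term problem is combinatorial, which is the contrast the abstract draws.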
<br />
'''29 Jan 2014'''<br />
* Speaker: David Klein<br />
* Affiliation: Audience<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''5 Feb 2014''' (leave open for Barth/Martinetz seminar)<br />
<br />
'''12 Feb 2014'''<br />
* Speaker: Ilya Sutskever <br />
* Affiliation: Google<br />
* Host: Zayd<br />
* Status: confirmed<br />
* Title: Continuous vector representations for machine translation<br />
* Abstract: Dictionaries and phrase tables are the basis of modern statistical machine translation systems. I will present a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures using large monolingual data, and by mapping between the languages using a small bilingual dataset. It uses distributed representations of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs. Joint work with Tomas Mikolov and Quoc Le.<br />
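The method described above reduces to fitting a linear map between the two embedding spaces from a small seed dictionary and then translating by nearest neighbour. A toy sketch (the words and vectors are fabricated for illustration; in the real method the embeddings come from large monolingual corpora, e.g. word2vec):

```python
import numpy as np

# Toy monolingual word embeddings; purely illustrative values.
en = {"one": [1.0, 0.0], "two": [0.0, 1.0], "three": [1.0, 1.0]}
es = {"uno": [0.0, 1.0], "dos": [1.0, 0.0], "tres": [1.0, 1.0]}

# Small bilingual seed dictionary used to fit the linear map W.
pairs = [("one", "uno"), ("two", "dos")]
X = np.array([en[a] for a, _ in pairs])
Y = np.array([es[b] for _, b in pairs])
W, *_ = np.linalg.lstsq(X, Y, rcond=None)     # least squares: X @ W ~= Y

def translate(word):
    """Map an English vector into Spanish space; return nearest neighbour."""
    z = np.asarray(en[word]) @ W
    return min(es, key=lambda w: np.linalg.norm(np.asarray(es[w]) - z))

print(translate("three"))                     # prints: tres
```

The point of the talk is that this works surprisingly well at scale: the geometry of the two monolingual spaces is similar enough that a single linear map, fit on a small dictionary, generalizes to unseen words and phrases.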
<br />
'''25 Feb 2014'''<br />
* Speaker: Alexander Terekhov <br />
* Affiliation: CNRS - Université Paris Descartes<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Constructing space: how a naive agent can learn spatial relationships by observing sensorimotor contingencies<br />
* Abstract:<br />
<br />
'''12 March 2014'''<br />
* Speaker: Carlos Portera-Cailliau<br />
* Affiliation: UCLA<br />
* Host: Mike<br />
* Status: confirmed<br />
* Title: Circuit defects in the neocortex of Fmr1 knockout mice<br />
* Abstract: TBA<br />
<br />
'''19 March 2014'''<br />
* Speaker: Dean Buonomano<br />
* Affiliation: UCLA<br />
* Host: Mike<br />
* Status: confirmed<br />
* Title: State-dependent Networks: Timing and Computations Based on Neural Dynamics and Short-term Plasticity<br />
* Abstract: The brain’s ability to seamlessly assimilate and process spatial and temporal information is critical to most behaviors, from understanding speech to playing the piano. Indeed, because the brain evolved to navigate a dynamic world, timing and temporal processing represent a fundamental computation. We have proposed that timing and the processing of temporal information emerges from the interaction between incoming stimuli and the internal state of neural networks. The internal state, is defined not only by ongoing activity (the active state) but by time-varying synaptic properties, such as short-term synaptic plasticity (the hidden state). One prediction of this hypothesis is that timing is a general property of cortical circuits. We provide evidence in this direction by demonstrating that in vitro cortical networks can “learn” simple temporal patterns. Finally, previous theoretical studies have suggested that recurrent networks capable of self-perpetuating activity hold significant computational potential. However, harnessing the computational potential of these networks has been hampered by the fact that such networks are chaotic. We show that it is possible to “tame” chaos through recurrent plasticity, and create a novel and powerful general framework for how cortical circuits compute.<br />
<br />
'''26 March 2014'''<br />
* Speaker: Robert G. Smith<br />
* Affiliation: University of Pennsylvania<br />
* Host: Mike S<br />
* Status: confirmed<br />
* Title: Role of Dendritic Computation in the Direction-Selective Circuit of Retina<br />
* Abstract: The retina utilizes a variety of signal processing mechanisms to compute direction from image motion. The computation is accomplished by a circuit that includes starburst amacrine cells (SBACs), which are GABAergic neurons presynaptic to direction-selective ganglion cells (DSGCs). SBACs are symmetric neurons with several branched dendrites radiating out from the soma. When a stimulus moving back and forth along a SBAC dendrite sequentially activates synaptic inputs, larger post-synaptic potentials (PSPs) are produced in the dendritic tips when the stimulus moves outwards from the soma. The directional difference in EPSP amplitude is further amplified near the dendritic tips by voltage-gated channels to produce directional release of GABA. Reciprocal inhibition between adjacent SBACs may also amplify directional release. Directional signals in the independent SBAC branches are preserved because each dendrite makes selective contacts only with DSGCs of the appropriate preferred-direction. Directional signals are further enhanced within the dendritic arbor of the DSGC, which essentially comprises an array of distinct dendritic compartments. Each of these dendritic compartments locally sum excitatory and inhibitory inputs, amplifies them with voltage-gated channels, and generates spikes that propagate to the axon via the soma. Overall, the computation of direction in the retina is performed by several local dendritic mechanisms both presynaptic and postsynaptic, with the result that directional responses are robust over a broad range of stimuli.<br />
<br />
'''16 April 2014'''<br />
* Speaker: David Pfau<br />
* Affiliation: Columbia<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''22 April 2014 *Tuesday*'''<br />
* Speaker: Jochen Braun<br />
* Affiliation: Otto-von-Guericke University, Magdeburg<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Dynamics of visual perception and collective neural activity<br />
* Abstract:<br />
<br />
'''29 April 2014'''<br />
* Speaker: Giuseppe Vitiello<br />
* Affiliation: University of Salerno<br />
* Host: Fritz/Walter Freeman<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''30 April 2014'''<br />
* Speaker: Masataka Watanabe<br />
* Affiliation: University of Tokyo / Max Planck Institute for Biological Cybernetics<br />
* Host: Gautam Agarwal<br />
* Status: confirmed<br />
* Title: Turing Test for Machine Consciousness and the Chaotic Spatiotemporal Fluctuation Hypothesis<br />
* Abstract: I propose an experimental method to test various hypotheses on consciousness. Inspired by Sperry's observation that split-brain patients possess two independent streams of consciousness, the idea is to implement candidate neural mechanisms of visual consciousness onto an artificial cortical hemisphere and test whether subjective experience is evoked in the device's visual hemifield. In contrast to modern neurosynthetic devices, I show that mimicking interhemispheric connectivity assures that authentic and fine-grained subjective experience arises only when a stream of consciousness is generated within the device. It is valid under a widely believed assumption regarding interhemispheric connectivity and neuronal stimulus-invariance. (I will briefly explain my own evidence of human V1 not responding to changes in the contents of visual awareness [1])<br />
<br />
If consciousness is actually generated within the device, we should be able to construct a case where two objects presented in the device's visual field are distinguishable by visual experience but not by what is communicated through the brain-machine interface. As strange as it may sound, and clearly violating the laws of physics, this is likely to be happening in the intact brain, where unified subjective bilateral vision and its verbal report occur without the total interhemispheric exchange of conscious visual information.<br />
<br />
Together, I present a hypothesis on the neural mechanism of consciousness, “The Chaotic Spatiotemporal Fluctuation Hypothesis”, that passes the proposed test for visual qualia and also explains how the physics we know of today is violated. Here, neural activity is divided into two components, the time-averaged activity and the residual temporally fluctuating activity, where the former serves as the content of consciousness (neuronal population vector) and the latter as consciousness itself. The content is “read” into consciousness in the sense that every local perturbation caused by a change in the neuronal population vector creates a spatiotemporal wave in the fluctuation component that travels throughout the system. Deterministic chaos assures that every local difference makes a difference to the whole of the dynamics, as in the butterfly effect, serving as a foundation for the holistic nature of consciousness. I will present data from simultaneous electrophysiology-fMRI recordings and human fMRI [2] that support the existence of such large-scale causal fluctuation.<br />
<br />
Here, the chaotic fluctuation cannot be decoded to trace back the original perturbation in the neuronal population vector, because the initial states of all neurons would be required with infinite precision to do so. Hence what is transmitted between the two hemispheres is not "information" in the normal sense. This illustrates the violation of physics by the metaphysical assumption "chaotic spatiotemporal fluctuation is consciousness", where unification of bilateral vision and the solving of visual tasks (e.g. perfect symmetry detection) are achieved without exchanging the otherwise required Shannon information between the two hemispheres.<br />
<br />
Finally, minimal and realistic versions of the proposed test for visual qualia can be conducted on laboratory animals to validate the hypothesis. These deal with two biological hemispheres, which we already know contain consciousness. We dissect the interhemispheric connectivity and form instead an artificial one that is capable of filtering out the neural fluctuation component. A limited interhemispheric connectivity may be sufficient, which would drastically reduce the technological challenge. If the subject is capable of conducting a bilateral stimulus-matching task with the full artificial interhemispheric connectivity, but not when the fluctuation component is filtered out, this can be considered strong supporting evidence for the hypothesis.<br />
<br />
1. Watanabe, M., Cheng, K., Ueno, K., Asamizuya, T., Tanaka, K., Logothetis, N., Attention but not awareness modulates the BOLD signal in the human V1 during binocular suppression. Science, 2011. 334(6057): p. 829-31.<br />
<br />
2. Watanabe, M., Bartels, A., Macke, J., Logothetis, N., Temporal jitter of the BOLD signal reveals a reliable initial dip and improved spatial resolution. Curr Biol, 2013. 23(21): p. 2146-50.<br />
<br />
'''11 June 2014'''<br />
* Speaker: Stuart Hameroff<br />
* Affiliation: University of Arizona, Tucson<br />
* Host: Gautam<br />
* Status: confirmed<br />
* Title: ‘Tuning the brain’ – Treating mental states through microtubule vibrations <br />
* Abstract: Do mental states derive entirely from brain neuronal membrane activities? Neuronal interiors are organized by microtubules (‘MTs’), protein polymers proposed to encode memory, process information and support consciousness. Using nanotechnology, Bandyopadhyay’s group at MIT has shown coherent vibrations (megahertz to 10 kilohertz) from microtubule bundles inside active neurons, vibrations (electric field potentials ~40 to 50 mV) able to influence membrane potentials. This suggests EEG rhythms are ‘beat’ frequencies of megahertz vibrations in microtubules inside neurons (Hameroff and Penrose, 2014), and that consciousness and cognition involve vibrational patterns resonating across scales in the brain, more like music than computation. MT megahertz vibrations may be a useful therapeutic target for ‘tuning’ mood and mental states. Among noninvasive transcranial brain stimulation techniques (TMS, tDCS), transcranial ultrasound (TUS) consists of megahertz mechanical vibrations. Applied at the scalp, low-intensity, sub-thermal ultrasound safely reaches the brain. In human studies, brief (15 to 30 seconds) TUS at 0.5, 2 and 8 megahertz to frontal-temporal cortex results in 40 minutes or longer of reported mood improvement, and focused TUS enhances sensory discrimination (Legon et al, 2014). In vitro, ultrasound promotes neurite outgrowth in embryonic neurons (Raman), and stabilizes microtubules against disassembly (Gupta). (In Alzheimer’s disease, MTs disassemble and release tau.) These findings suggest ‘tuning the brain’ with TUS should be a safe, effective and inexpensive treatment for Alzheimer’s, traumatic brain injury, depression, anxiety, PTSD and other disorders. <br />
<br />
References: Hameroff S, Penrose R (2014) Phys Life Rev http://www.sciencedirect.com/science/article/pii/S1571064513001188; Sahu et al (2013) Biosens Bioelectron 47:141–8; Sahu et al (2013) Appl Phys Lett 102:123701; Legon et al (2014) Nature Neuroscience 17: 322–329<br />
<br />
'''25 June 2014'''<br />
* Speaker: Peter Loxley<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: The two-dimensional Gabor function adapted to natural image statistics: An analytical model of simple-cell responses in the early visual system<br />
* Abstract: TBA<br />
<br />
=== 2012/13 academic year ===<br />
<br />
'''26 Sept 2012''' <br />
* Speaker: Jason Yeatman<br />
* Affiliation: Department of Psychology, Stanford University<br />
* Host: Bruno/Susana Chung<br />
* Status: confirmed<br />
* Title: The Development of White Matter and Reading Skills<br />
* Abstract: The development of cerebral white matter involves both myelination and pruning of axons, and the balance between these two processes may differ between individuals. Cross-sectional measures of white matter development mask the interplay between these active developmental processes and their connection to cognitive development. We followed a cohort of 39 children longitudinally for three years, and measured white matter development and reading development using diffusion tensor imaging and behavioral tests. In the left arcuate and inferior longitudinal fasciculus, children with above-average reading skills initially had low fractional anisotropy (FA) with a steady increase over the 3-year period, while children with below-average reading skills had higher initial FA that declined over time. We describe a dual-process model of white matter development that balances biological processes that have opposing effects on FA, such as axonal myelination and pruning, to explain the pattern of results.<br />
<br />
'''8 Oct 2012''' <br />
* Speaker: Sophie Deneve<br />
* Affiliation: Laboratoire de Neurosciences cognitives, ENS-INSERM<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Balanced spiking networks can implement dynamical systems with predictive coding<br />
* Abstract: Neural networks can integrate sensory information and generate continuously varying outputs, even though individual neurons communicate only with spikes---all-or-none events. Here we show how this can be done efficiently if spikes communicate "prediction errors" between neurons. We focus on the implementation of linear dynamical systems and derive a spiking network model from a single optimization principle. Our model naturally accounts for two puzzling aspects of cortex. First, it provides a rationale for the tight balance and correlations between excitation and inhibition. Second, it predicts asynchronous and irregular firing as a consequence of predictive population coding, even in the limit of vanishing noise. We show that our spiking networks have error-correcting properties that make them far more accurate and robust than comparable rate models. Our approach suggests spike times do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly under-estimated.<br />
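A one-dimensional caricature of the spike rule underlying such networks may help (my own sketch, not the model in the talk; the signal and decoder weight are made up): each neuron fires only when a spike would reduce the error between the signal and the decoded estimate, so spikes communicate prediction errors rather than the signal itself.

```python
import numpy as np

def track(signal, w=0.1):
    """Greedy error-reducing spike rule, the core of predictive spiking codes.

    A (+w) neuron and a (-w) neuron each fire only when the spike would
    reduce the squared error between the signal x and the decoded estimate
    xhat, i.e. when |x - xhat| > w/2. Spiking is therefore driven by the
    prediction error, not by the signal itself.
    """
    xhat, n_spikes = 0.0, 0
    est = []
    for x in signal:
        err = x - xhat
        if err > w / 2:           # spike of the +w neuron
            xhat += w
            n_spikes += 1
        elif err < -w / 2:        # spike of the -w neuron
            xhat -= w
            n_spikes += 1
        est.append(xhat)
    return np.array(est), n_spikes

# A smooth signal is tracked to within ~w/2 by a modest number of spikes.
t = np.arange(0.0, 1.0, 1e-3)
x = np.sin(2 * np.pi * t)
est, n_spikes = track(x)
print("max error:", float(np.max(np.abs(est - x))), "spikes:", n_spikes)
```

The full model generalizes this greedy rule to populations implementing arbitrary linear dynamical systems, which is where the tight excitation-inhibition balance and irregular firing in the abstract come from.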
<br />
<br />
'''19 Oct 2012'''<br />
* Speaker: Gert Van Dijck<br />
* Affiliation: Cambridge<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: A solution to identifying neurones using extracellular activity in awake animals: a probabilistic machine-learning approach<br />
* Abstract: Electrophysiological studies over the last fifty years have been hampered by the difficulty of reliably assigning signals to identified cortical neurones. Previous studies have employed a variety of measures based on spike timing or waveform characteristics to tentatively classify other neurone types (Vos et al., Eur. J. Neurosci., 1999; Prsa et al., J. Neurosci., 2009), in some cases supported by juxtacellular labelling (Simpson et al., Prog. Brain Res., 2005; Holtzman et al., J. Physiol., 2006; Barmack and Yakhnitsa, J. Neurosci., 2008; Ruigrok et al., J. Neurosci., 2011), or intracellular staining and/or assessment of membrane properties (Chadderton et al., Nature, 2004; Jorntell and Ekerot, J. Neurosci., 2006; Rancz et al., Nature, 2007). Anaesthetised animals have been widely used as they can provide a ground-truth through neuronal labelling, which is much harder to achieve in awake animals, where spike-derived measures tend to be relied upon (Lansink et al., Eur. J. Neurosci., 2010). Whilst spike-shapes carry potentially useful information for classifying neuronal classes, they vary with electrode type and the geometric relationship between the electrode and the spike generation zone (Van Dijck et al., Int. J. Neural Syst., 2012). Moreover, spike-shape measurement is achieved with a variety of techniques, making it difficult to compare and standardise between laboratories. In this study we build probabilistic models on the statistics derived from the spike trains of spontaneously active neurones in the cerebellum and the ventral midbrain. The mean spike frequency in combination with the log-interval entropy (Bhumbra and Dyball, J. Physiol.-London, 2004) of the inter-spike-interval distribution yields the highest prediction accuracy. The cerebellum model consists of two sub-models: a molecular layer - Purkinje layer model and a granular layer - Purkinje layer model. The first model identifies molecular layer interneurones and Purkinje cells with high accuracy (92.7 %), while the latter identifies Golgi cells, granule cells, mossy fibers and Purkinje cells with high accuracy (99.2 %). Furthermore, it is shown that the model trained on anaesthetised rat and decerebrate cat data has broad applicability to other species and behavioural states: anaesthetised mice (80 %), awake rabbits (94.2 %) and awake rhesus monkeys (89 - 90 %). Recently, optogenetics has made it possible to obtain a ground-truth about cell classes. Using optogenetically identified GABAergic and dopaminergic cells, we build similar statistical models to identify these neuron types from the ventral midbrain. This illustrates that our approach will be of general use to a broad variety of laboratories.<br />
<br />
'''Tuesday, 23 Oct 2012''' <br />
* Speaker: Jaimie Sleigh<br />
* Affiliation: University of Auckland<br />
* Host: Fritz/Andrew Szeri<br />
* Status: confirmed<br />
* Title: Is General Anesthesia a failure of cortical information integration?<br />
* Abstract: General anesthesia and natural sleep share some commonalities and some differences. Quite a lot is known about the chemical and neuronal effects of general anesthetic drugs. There are two main groups of anesthetic drugs, which can be distinguished by their effects on the EEG. The most commonly used drugs exert a strong GABAergic action, whereas a second group is characterized by minimal GABAergic effects but significant NMDA blockade. It is less clear which of these various effects, and how, result in the patient's failure to wake up when the surgeon cuts them. I will present some results from experimental brain slice work, and from theoretical mean-field modelling of anesthesia and sleep, that support the idea that the final common mechanism of both types of anaesthesia is fragmentation of long-distance information flow in the cortex.<br />
<br />
'''31 Oct 2012''' (Halloween)<br />
* Speaker: Jonathan Landy<br />
* Affiliation: UCSB<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Mean-field replica theory: review of basics and a new approach<br />
* Abstract: Replica theory provides a general method for evaluating the mode of a distribution, and has varied applications to problems in statistical mechanics, signal processing, etc. Evaluation of the formal expressions arising in replica theory represents a formidable technical challenge, but one that physicists have apparently intuited correct methods for handling. In this talk, I will first provide a review of the historical development of replica theory, covering: 1) motivation, 2) the intuited "Parisi ansatz" solution, 3) continued controversies, and 4) a survey of applications (including to neural networks). Following this, I will discuss an exploratory effort of mine aimed at developing an ansatz-free solution method. As an example, I will work out the phase diagram for a simple spin-glass model. This talk is intended primarily as a tutorial.<br />
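For background, the formal starting point of replica theory is the standard identity relating the quenched (disorder-averaged) free energy to analytically continued integer moments of the partition function Z:

```latex
\overline{\log Z}
  \;=\; \lim_{n \to 0} \frac{\overline{Z^n} - 1}{n}
  \;=\; \lim_{n \to 0} \frac{1}{n}\,\log \overline{Z^n}
```

The "formidable technical challenge" mentioned above is precisely this continuation: \(\overline{Z^n}\) is computed for integer n (n coupled copies, or replicas, of the system) and then continued to n → 0, which is where the Parisi ansatz enters.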
<br />
'''7 Nov 2012''' <br />
* Speaker: Tom Griffiths<br />
* Affiliation: UC Berkeley<br />
* Host: Daniel Little<br />
* Status: Confirmed<br />
* Title: Identifying human inductive biases<br />
* Abstract: People are remarkably good at acquiring complex knowledge from limited data, as is required in learning causal relationships, categories, or aspects of language. Successfully solving inductive problems of this kind requires having good "inductive biases" - constraints that guide inductive inference. Viewed abstractly, understanding human learning requires identifying these inductive biases and exploring their origins. I will argue that probabilistic models of cognition provide a framework that can facilitate this project, giving a transparent characterization of the inductive biases of ideal learners. I will outline how probabilistic models are traditionally used to solve this problem, and then present a new approach that uses Markov chain Monte Carlo algorithms as the basis for an experimental method that magnifies the effects of inductive biases.<br />
<br />
'''19 Nov 2012''' (Monday) (Thanksgiving week)<br />
* Speaker: Bin Yu<br />
* Affiliation: Dept. of Statistics and EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Representation of Natural Images in V4<br />
* Abstract: The functional organization of area V4 in the mammalian ventral visual pathway is far from being well understood. V4 is believed to play an important role in the recognition of shapes and objects and in visual attention, but the complexity of this cortical area makes it hard to analyze. In particular, no current model of V4 has shown good predictions for neuronal responses to natural images and there is no consensus on the primary role of V4.<br />
In this talk, we present analysis of electrophysiological data on the response of V4 neurons to natural images. We propose a new computational model that achieves comparable prediction performance for V4 as for V1 neurons. Our model does not rely on any pre-defined image features but only on invariance and sparse coding principles. We interpret our model using sparse principal component analysis and discover two groups of neurons: those selective to texture versus those selective to contours. This supports the thesis that one primary role of V4 is to extract objects from background in the visual field. Moreover, our study also confirms the diversity of V4 neurons. Among those selective to contours, some of them are selective to orientation, others to acute curvature features.<br />
(This is joint work with J. Mairal, Y. Benjamini, B. Willmore, M. Oliver and J. Gallant.)<br />
<br />
'''30 Nov 2012''' <br />
* Speaker: Yan Karklin<br />
* Affiliation: NYU<br />
* Host: Tyler<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''10 Dec 2012 (note this would be the Monday after NIPS)''' <br />
* Speaker: Marius Pachitariu<br />
* Affiliation: Gatsby / UCL<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: NIPS paper "Learning visual motion in recurrent neural networks"<br />
* Abstract: We present a dynamic nonlinear generative model for visual motion based on a latent representation of binary-gated Gaussian variables connected in a network. Trained on sequences of images by an STDP-like rule, the model learns to represent different movement directions in different variables. We use an online approximate inference scheme that can be mapped to the dynamics of networks of neurons. Probed with drifting grating stimuli and moving bars of light, neurons in the model show patterns of responses analogous to those of direction-selective simple cells in primary visual cortex. We show how the computations of the model are enabled by a specific pattern of learnt asymmetric recurrent connections. I will also briefly discuss our application of recurrent neural networks as statistical models of simultaneously recorded spiking neurons.<br />
<br />
'''12 Dec 2012''' <br />
* Speaker: Ian Goodfellow<br />
* Affiliation: U Montreal<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''7 Jan 2013'''<br />
* Speaker: Stuart Hameroff<br />
* Affiliation: University of Arizona <br />
* Host: Gautam Agarwal<br />
* Status: confirmed<br />
* Title: Quantum cognition and brain microtubules <br />
* Abstract: Cognitive decision processes are generally treated as classical Bayesian probabilities, but they may be better suited to quantum mathematics. For example: 1) Psychological conflict, ambiguity and uncertainty can be viewed as (quantum) superposition of multiple possible judgments and beliefs. 2) Measurement (e.g. answering a question, reaching a decision) reduces possibilities to definite states (‘constructing reality’, ‘collapsing the wave function’). 3) Previous questions influence subsequent answers, so sequence affects outcomes (‘contextual non-commutativity’). 4) Judgments and choices may deviate from classical logic, suggesting random, or ‘non-computable’, quantum influences. Can quantum cognition operate in the brain? Do classical brain activities simulate quantum processes? Or have biomolecular quantum devices evolved? In this talk I will discuss how a finer-scale, intra-neuronal level of quantum information processing in cytoskeletal microtubules can accumulate, operate upon and integrate quantum information and memory for self-collapse to classical states which regulate axonal firings, controlling behavior.<br />
<br />
'''Monday 14 Jan 2013, 1:00pm'''<br />
* Speaker: Dibyendu Mandal <br />
* Affiliation: Physics Dept., University of Maryland (Jarzynski group)<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: An exactly solvable model of Maxwell’s demon<br />
* Abstract: The paradox of Maxwell’s demon has stimulated numerous thought experiments, leading to discussions about the thermodynamic implications of information processing. However, the field has lacked a tangible example or model of an autonomous, mechanical system that reproduces the actions of the demon. To address this issue, we introduce an explicit model of a device that can deliver work to lift a mass against gravity by rectifying thermal fluctuations, while writing information to a memory register. We solve for the steady-state behavior of the model and construct its nonequilibrium phase diagram. In addition to the engine-like action described above, we identify a Landauer eraser region in the phase diagram where the model uses externally supplied work to remove information from the memory register. Our model offers a simple paradigm for investigating the thermodynamics of information processing by exposing a transparent mechanism of operation.<br />
<br />
'''23 Jan 2013'''<br />
* Speaker: Carlos Brody<br />
* Affiliation: Princeton<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Neural substrates of decision-making in the rat<br />
* Abstract: Gradual accumulation of evidence is thought to be a fundamental component of decision-making. Over the last 16 years, research in non-human primates has revealed neural correlates of evidence accumulation in parietal and frontal cortices, and other brain areas. However, the circuit mechanisms underlying these neural correlates remain unknown. Reasoning that a rodent model of evidence accumulation would allow a greater number of experimental subjects, and therefore experiments, as well as facilitate the use of molecular tools, we developed a rat accumulation-of-evidence task, the "Poisson Clicks" task. In this task, sensory evidence is delivered in pulses whose precisely controlled timing varies widely within and across trials. The resulting data are analyzed with models of evidence accumulation that use the richly detailed information of each trial’s pulse timing to distinguish between different decision mechanisms. The method provides great statistical power, allowing us to: (1) provide compelling evidence that rats are indeed capable of gradually accumulating evidence for decision-making; (2) accurately estimate multiple parameters of the decision-making process from behavioral data; and (3) measure, for the first time, the diffusion constant of the evidence accumulator, which we show to be optimal (i.e., equal to zero). In addition, the method provides a trial-by-trial, moment-by-moment estimate of the value of the accumulator, which can then be compared in awake behaving electrophysiology experiments to trial-by-trial, moment-by-moment neural firing rate measures. Based on such a comparison, we describe data and a novel analysis approach that reveals differences between parietal and frontal cortices in the neural encoding of accumulating evidence. 
Finally, using semi-automated training methods to produce tens of rats trained in the Poisson Clicks accumulation-of-evidence task, we have also used pharmacological inactivation to ask, for the first time, whether parietal and frontal cortices are required for accumulation of evidence, and we are using optogenetic methods to rapidly and transiently inactivate brain regions so as to establish precisely when, during each decision-making trial, each brain region's activity is necessary for performance of the task.<br />
<br />
'''28 Jan 2013'''<br />
* Speaker: Eugene M. Izhikevich<br />
* Affiliation: Brain Corporation<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Spikes<br />
* Abstract: Most communication in the brain is via spikes. While we understand the spike-generation mechanism of individual neurons, we fail to appreciate the spike-timing code and its role in neural computations. The speaker starts with simple models of neuronal spiking and bursting, describes small neuronal circuits that learn spike-timing code via spike-timing dependent plasticity (STDP), and finishes with biologically detailed and anatomically accurate large-scale brain models.<br />
<br />
'''29 Jan 2013'''<br />
* Speaker: Goren Gordon<br />
* Affiliation: Weizmann Institute<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Hierarchical Curiosity Loops – Model, Behavior and Robotics<br />
* Abstract: Autonomously learning about one's own body and its interaction with the environment is a formidable challenge, yet it is ubiquitous in biology: every animal pup and every human infant accomplishes this task in its first few months of life. Furthermore, biological agents’ curiosity actively drives them to explore and experiment in order to expedite their learning progress. To bridge the gap between biological and artificial agents, a formal mathematical theory of curiosity was developed that attempts to explain observed biological behaviors and enable the emergence of curiosity in robots. In the talk, I will present the hierarchical curiosity loops model, its application to rodent exploratory behavior, and its implementation in a fully autonomously learning and behaving reaching robot.<br />
<br />
'''29 Jan 2013'''<br />
* Speaker: Jenny Read<br />
* Affiliation: Institute of Neuroscience, Newcastle University<br />
* Host: Sarah<br />
* Status: confirmed<br />
* Title: Stereoscopic vision<br />
* Abstract: [To be written]<br />
<br />
'''7 Feb 2013'''<br />
* Speaker: Valero Laparra<br />
* Affiliation: University of Valencia<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Empirical statistical analysis of phases in Gabor filtered natural images<br />
* Abstract:<br />
<br />
'''20 Feb 2013'''<br />
* Speaker: Dolores Bozovic<br />
* Affiliation: UCLA<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Bifurcations and phase-locking dynamics in the auditory system<br />
* Abstract: The inner ear constitutes a remarkable biological sensor that exhibits nanometer-scale sensitivity of mechanical detection. The first step in auditory processing is performed by hair cells, which convert movement into electrical signals via the opening of mechanically gated ion channels. These cells operate in a viscous medium, but can nevertheless sustain oscillations, amplify incoming signals, and even exhibit spontaneous motility, indicating the presence of an underlying active amplification system. Theoretical models have proposed that a hair cell constitutes a nonlinear system with an internal feedback mechanism that can drive it across a bifurcation and into an unstable regime. Our experiments explore the nonlinear response as well as the feedback mechanisms that enable self-tuning already at the peripheral level, as measured in vitro on sensory tissue. A simple dynamical systems framework will be discussed that captures the main features of the experimentally observed behavior in the form of an Arnold tongue.<br />
<br />
'''27 March 2013'''<br />
* Speaker: Dale Purves<br />
* Affiliation: Duke<br />
* Host: Sarah<br />
* Status: confirmed<br />
* Title: How Visual Evolution Determines What We See<br />
* Abstract: Information about the physical world is excluded from visual stimuli by the nature of biological vision (the inverse optics problem). Nonetheless, humans and other visual animals routinely succeed in their environments. The talk will explain how the assignment of perceptual values to visual stimuli according to the frequency of occurrence of stimulus patterns resolves the inverse problem and determines the basic visual qualities we see. This interpretation of vision implies that the best (and perhaps the only) way to understand visual system circuitry is to evolve it, an idea supported by recent work.<br />
<br />
'''9 April 2013'''<br />
* Speaker: Mounya Elhilali<br />
* Affiliation: Johns Hopkins<br />
* Host: Tyler<br />
* Status: confirmed<br />
* Title: Attention at the cocktail party: Neural bases and computational strategies for auditory scene analysis<br />
* Abstract: The perceptual organization of sounds in the environment into coherent objects is a feat constantly facing the auditory system. It manifests itself in the everyday challenge faced by humans and animals alike to parse complex acoustic information arising from multiple sound sources into separate auditory streams. While seemingly effortless, uncovering the neural mechanisms and computational principles underlying this remarkable ability remain a challenge for both the experimental and theoretical neuroscience communities. In this talk, I discuss the potential role of neuronal tuning in mammalian primary auditory cortex in mediating this process. I also examine the role of mechanisms of attention in adapting this neural representation to reflect both the sensory content and the changing behavioral context of complex acoustic scenes.<br />
<br />
'''17th of April 2013'''<br />
* Speaker: Wiktor Młynarski<br />
* Affiliation: Max Planck Institute for Mathematics in the Sciences<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: Statistical Models of Binaural Sounds<br />
* Abstract: The auditory system exploits disparities in the sounds arriving at the left and right ear to extract information about the spatial configuration of sound sources. According to the widely acknowledged Duplex Theory, sounds of low frequency are localized based on Interaural Time Differences (ITDs) and localization of high frequency sources relies on Interaural Level Differences (ILDs). Natural sounds, however, possess a rich structure and contain multiple frequency components. This leads to the question: what are the contributions of different cues to sound position identification in the natural environment and how much information do they carry about its spatial structure? In this talk, I will present my attempts to answer the above questions using statistical, generative models of naturalistic (simulated) and fully natural binaural sounds.<br />
<br />
'''15 May 2013'''<br />
* Speaker: Byron Yu<br />
* Affiliation: CMU<br />
* Host: Bruno/Jose (jointly sponsored with CNEP)<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''22 May 2013'''<br />
* Speaker: Bijan Pesaran<br />
* Affiliation: NYU<br />
* Host: Bruno/Jose (jointly sponsored with CNEP)<br />
* Status: confirmed <br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
=== 2011/12 academic year ===<br />
<br />
'''15 Sep 2011 (Thursday, at noon)'''<br />
* Speaker: Kathrin Berkner<br />
* Affiliation: Ricoh Innovations Inc.<br />
* Host: Ivana Tosic<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''21 Sep 2011'''<br />
* Speaker: Mike Kilgard<br />
* Affiliation: UT Dallas<br />
* Host: Michael Silver<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''27 Sep 2011'''<br />
* Speaker: Moshe Gur<br />
* Affiliation: Dept. of Biomedical Engineering, Technion, Israel Institute of Technology<br />
* Host: Bruno/Stan<br />
* Status: Confirmed<br />
* Title: On the unity of perception: How does the brain integrate activity evoked at different cortical loci?<br />
* Abstract: Any physical device we know, including computers, when comparing A to B must send the information to a point C. I have done experiments in three modalities, somatosensory, auditory, and visual, in which two different loci in primary cortex are stimulated, and I argue that the "machine" converging hypothesis cannot explain the perceptual results. Thus we must assume a non-converging mechanism whereby the brain, at times, can compare (integrate, process) events that take place at different loci without sending the information to a common target. Once we allow for such a mechanism, many phenomena can be viewed differently. Take, for example, the question of how and where multi-sensory integration takes place; we perceive a synchronized talking face, yet detailed visual and auditory information are represented at very different brain loci.<br />
<br />
'''5 Oct 2011'''<br />
* Speaker: Susanne Still<br />
* Affiliation: University of Hawaii at Manoa<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: Predictive power, memory and dissipation in learning systems operating far from thermodynamic equilibrium<br />
* Abstract: Understanding the physical processes that underlie the functioning of biological computing machinery often requires describing processes that occur far from thermodynamic equilibrium. In recent years significant progress has been made in this area, most notably Jarzynski’s work relation and Crooks’ fluctuation theorem. In this talk I will explore how dissipation of energy is related to a system's information processing inefficiency. The focus is on driven systems that are embedded in a stochastic operating environment. If we describe the system as a state machine, then we can interpret the stochastic dynamics as performing a computation that results in an (implicit) model of the stochastic driving signal. I will show that instantaneous non-predictive information, which serves as a measure of model inefficiency, provides a lower bound on the average dissipated work. This implies that learning systems with larger predictive power can operate more energy-efficiently. We could speculate that biological systems may have evolved to reflect this kind of adaptation. One interesting insight is that a purely physical argument yields a requirement perfectly in line with the general belief that a useful model must be predictive (at fixed model complexity). Our result thereby ties together ideas from learning theory with basic non-equilibrium thermodynamics.<br />
<br />
'''19 Oct 2011'''<br />
* Speaker: Graham Cummins<br />
* Affiliation: WSU<br />
* Host: Jeff Teeters<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''26 Oct 2011'''<br />
* Speaker: Shinji Nishimoto<br />
* Affiliation: Gallant lab, UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''14 Dec 2011'''<br />
* Speaker: Austin Roorda<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: How the unstable eye sees a stable and moving world<br />
* Abstract:<br />
<br />
'''11 Jan 2012'''<br />
* Speaker: Ken Nakayama<br />
* Affiliation: Harvard University<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Subjective Contours<br />
* Abstract: The concept of the receptive field in visual science has been transformative. It fueled the great discoveries of the second half of the 20th century, providing the dominant understanding of how the visual system works at its early stages. Its reign has been extended to the field of object recognition where, in the form of a linear classifier, it provides a framework for understanding visual object recognition (DiCarlo and Cox, 2007).<br />
Untamed, however, are areas of visual perception, now more or less ignored, dubbed variously the 2.5-D sketch, mid-level vision, or surface representations. Here, neurons with their receptive fields seem unable to bridge the gap, to supply us with even a plausible speculative framework for understanding amodal completion, subjective contours and other surface phenomena. Correspondingly, these areas have become a backwater: ignored, leapt over.<br />
Subjective contours, however, remain as vivid as ever, even more so.<br />
Every day, our visual system makes countless visual inferences as to the layout of the world's surfaces and objects. What’s remarkable is that subjective contours visibly reveal these inferences.<br />
<br />
'''Tuesday, 24 Jan 2012'''<br />
* Speaker: Aniruddha Das<br />
* Affiliation: Columbia University<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''22 Feb 2012'''<br />
* Speaker: Elad Schneidman <br />
* Affiliation: Department of Neurobiology, Weizmann Institute of Science<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Sparse high order interaction networks underlie learnable neural population codes<br />
* Abstract:<br />
<br />
'''29 Feb 2012 (at noon as usual)'''<br />
* Speaker: Heather Read<br />
* Affiliation: U. Connecticut<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: "Transformation of sparse temporal coding from auditory colliculus and cortex"<br />
* Abstract: TBD<br />
<br />
'''1 Mar 2012 (note: Thurs)'''<br />
* Speaker: Daniel Zoran<br />
* Affiliation: Hebrew University, Jerusalem<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''7 Mar 2012'''<br />
* Speaker: David Sivak<br />
* Affiliation: UCB<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''8 Mar 2012'''<br />
* Speaker: Ivan Schwab<br />
* Affiliation: UC Davis<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Evolution's Witness: How Eyes Evolved<br />
* Abstract:<br />
<br />
'''14 Mar 2012'''<br />
* Speaker: David Sussillo<br />
* Affiliation:<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''18 April 2012'''<br />
* Speaker: Kristofer Bouchard<br />
* Affiliation: UCSF<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Cortical Foundations of Human Speech Production<br />
* Abstract:<br />
<br />
'''23 May 2012''' (rescheduled from April 11)<br />
* Speaker: Logan Grosenick<br />
* Affiliation: Stanford, Deisseroth & Suppes Labs<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: Acquisition, creation, & analysis of 4D light fields with applications to calcium imaging & optogenetics<br />
* Abstract: In Light Field Microscopy (LFM), images can be computationally refocused after they are captured [1]. This permits acquiring focal stacks and reconstructing volumes from a single camera frame. In Light Field Illumination (LFI), the same ideas can be used to create an illumination system that can deliver focused light to any position in a volume without moving optics, and these two devices (LFM/LFI) can be used together in the same system [2]. So far, these imaging and illumination systems have largely been used independently in proof-of-concept experiments [1,2]. In this talk I will discuss applications of a combined scanless volumetric imaging and volumetric illumination system applied to 4D calcium imaging and photostimulation of neurons in vivo and in vitro. The volumes resulting from these methods are large (>500,000 voxels per time point), collected at 10-100 frames per second, and highly correlated in space and time. Analyzing such data has required the development and application of machine learning methods appropriate to large, sparse, nonnegative data, as well as the estimation of neural graphical models from calcium transients. This talk will cover the reconstruction and creation of volumes in a microscope using Light Fields [1,2], and the current state-of-the-art for analyzing these large volumes in the context of calcium imaging and optogenetics. <br />
<br />
[1] M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz. Light Field Microscopy. ACM Transactions on Graphics 25(3), Proceedings of SIGGRAPH 2006.<br />
[2] M. Levoy, Z. Zhang, and I. McDowall. Recording and controlling the 4D light field in a microscope. Journal of Microscopy, Volume 235, Part 2, 2009, pp. 144-162. Cover article.<br />
<br />
BIO: Logan received bachelor's degrees with honors in Biology and Psychology from Stanford, and a master's degree in Statistics, also from Stanford. He is a Ph.D. candidate in the Neurosciences Program working in the labs of Karl Deisseroth and Patrick Suppes, and a trainee at the Stanford Center for Mind, Brain, and Computation. He is interested in developing and applying novel computational imaging and machine learning techniques in order to observe, control, and understand neuronal circuit dynamics.<br />
<br />
'''7 June 2012''' (Thursday)<br />
* Speaker: Mitya Chklovskii<br />
* Affiliation: Janelia<br />
* Host: Bruno<br />
* Status:<br />
* Title:<br />
* Abstract:<br />
<br />
'''27 June 2012''' <br />
* Speaker: Jerry Feldman<br />
* Affiliation:<br />
* Host: Bruno<br />
* Status:<br />
* Title:<br />
* Abstract:<br />
<br />
'''30 July 2012''' <br />
* Speaker: Lucas Theis<br />
* Affiliation: Matthias Bethge lab, Werner Reichardt Centre for Integrative Neuroscience, Tübingen<br />
* Host: Jascha<br />
* Status: Confirmed<br />
* Title: Hierarchical models of natural images<br />
* Abstract: Probabilistic models of natural images have been used both to solve a variety of computer vision tasks and as a means to better understand the computations performed by the visual system in the brain. Many theoretical considerations and biological observations suggest that natural image models should be hierarchically organized, yet to date the best-known models are still based on what are better described as shallow representations. In this talk, I will present two image models. One is based on the idea of Gaussianization for greedily constructing hierarchical generative models. I will show that when combined with independent subspace analysis, it is able to compete with the state of the art for modeling image patches. The other model combines mixtures of Gaussian scale mixtures with a directed graphical model and multiscale image representations, and is able to generate highly structured images of arbitrary size. Evaluating the model's likelihood and comparing it to a large number of other image models shows that it might well be the best model of natural images yet.<br />
<br />
(joint work with Reshad Hosseini and Matthias Bethge)<br />
<br />
=== 2010/11 academic year ===<br />
<br />
'''02 Sep 2010'''<br />
* Speaker: Johannes Burge<br />
* Affiliation: University of Texas at Austin<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''8 Sep 2010'''<br />
* Speaker: Tobi Szuts<br />
* Affiliation: Meister Lab/ Harvard U.<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Wireless recording of neural activity in the visual cortex of a freely moving rat.<br />
* Abstract: Conventional neural recording systems restrict behavioral experiments to a flat indoor environment compatible with the cable that tethers the subject to the recording instruments. To overcome these constraints, we developed a wireless multi-channel system for recording neural signals from a freely moving animal the size of a rat or larger. The device takes up to 64 voltage signals from implanted electrodes, samples each at 20 kHz, time-division multiplexes them onto a single output line, and transmits that output by radio frequency to a receiver and recording computer more than 60 m away. The system introduces less than 4 µV RMS of electrode-referred noise, comparable to wired recording systems and considerably less than biological noise. The system offers a greater channel count or transmission distance than existing telemetry systems. The wireless system has been used to record from the visual cortex of a rat during unconstrained conditions. Outdoor recordings show that V1 activity is modulated by nest-building activity. During unguided behavior indoors, neurons responded rapidly and consistently to changes in light level, suppressive effects were prominent in response to an illuminant transition, and firing rate was strongly modulated by locomotion. Neural firing in the visual cortex is relatively sparse and moderate correlations are observed over large distances, suggesting that synchrony is driven by global processes.<br />
<br />
'''29 Sep 2010'''<br />
* Speaker: Vikash Gilja<br />
* Affiliation: Stanford University<br />
* Host: Charles<br />
* Status: Confirmed<br />
* Title: Towards Clinically Viable Neural Prosthetic Systems.<br />
* Abstract:<br />
<br />
'''20 Oct 2010'''<br />
* Speaker: Alexandre Francois<br />
* Affiliation: USC<br />
* Host: <br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''3 Nov 2010'''<br />
* Speaker: Eric Jonas and Vikash Mansinghka<br />
* Affiliation: Navia Systems<br />
* Host: Jascha<br />
* Status: Confirmed<br />
* Title: Natively Probabilistic Computation: Principles, Artifacts, Architectures and Applications<br />
* Abstract: Complex probabilistic models and Bayesian inference are becoming<br />
increasingly critical across science and industry, especially in<br />
large-scale data analysis. They are also central to our best<br />
computational accounts of human cognition, perception and action.<br />
However, all these efforts struggle with the infamous curse of<br />
dimensionality. Rich probabilistic models can seem hard to write and<br />
even harder to solve, as specifying and calculating probabilities<br />
often appears to require the manipulation of exponentially (and<br />
sometimes infinitely) large tables of numbers.<br />
<br />
We argue that these difficulties reflect a basic mismatch between the<br />
needs of probabilistic reasoning and the deterministic, functional<br />
orientation of our current hardware, programming languages and CS<br />
theory. To mitigate these issues, we have been developing a stack of<br />
abstractions for natively probabilistic computation, based around<br />
stochastic simulators (or samplers) for distributions, rather than<br />
evaluators for deterministic functions. Ultimately, our aim is to<br />
produce a model of computation and the associated hardware and<br />
programming tools that are as suited for uncertain inference and<br />
decision-making as our current computers are for precise arithmetic.<br />
<br />
In this talk, we will give an overview of the entire stack of<br />
abstractions supporting natively probabilistic computation, with<br />
technical detail on several hardware and software artifacts we have<br />
implemented so far. We will also touch on some new theoretical results<br />
regarding the computational complexity of probabilistic programs.<br />
Throughout, we will motivate and connect this work to some current<br />
applications in biomedical data analysis and computer vision, as well<br />
as potential hypotheses regarding the implementation of probabilistic<br />
computation in the brain.<br />
<br />
This talk includes joint work with Keith Bonawitz, Beau Cronin,<br />
Cameron Freer, Daniel Roy and Joshua Tenenbaum.<br />
<br />
BRIEF BIOGRAPHY<br />
<br />
Vikash Mansinghka is a co-founder and the CTO of Navia Systems, a<br />
venture-funded startup company building natively probabilistic<br />
computing machines. He spent 10 years at MIT, eventually earning an<br />
S.B. in Mathematics, an S.B. in Computer Science, an MEng in Computer<br />
Science, and a PhD in Computation. He held graduate fellowships from<br />
the NSF and MIT's Lincoln Laboratories, and his PhD dissertation won<br />
the 2009 MIT George M. Sprowls award for best dissertation in computer<br />
science. He currently serves on DARPA's Information Science and<br />
Technology (ISAT) Study Group.<br />
<br />
Eric Jonas is a co-founder of Navia Systems, responsible for in-house<br />
accelerated inference research and development. He spent ten years at<br />
MIT, where he earned S.B. degrees in electrical engineering and computer<br />
science and neurobiology, an MEng in EECS, with a neurobiology PhD<br />
expected really soon. He’s passionate about biological applications<br />
of probabilistic reasoning and hopes to use Navia’s capabilities to<br />
combine data from biological science, clinical histories, and patient<br />
outcomes into seamless models.<br />
<br />
'''8 Nov 2010'''<br />
* Speaker: Patrick Ruther<br />
* Affiliation: Imtek, University of Freiburg<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''10 Nov 2010'''<br />
* Speaker: Aurel Lazar<br />
* Affiliation: Department of Electrical Engineering, Columbia University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Encoding Visual Stimuli with a Population of Hodgkin-Huxley Neurons<br />
* Abstract: We first present a general framework for the reconstruction of natural video<br />
scenes encoded with a population of spiking neural circuits with random thresholds.<br />
The visual encoding system consists of a bank of filters, modeling the visual<br />
receptive fields, in cascade with a population of neural circuits, modeling encoding<br />
with spikes in the early visual system.<br />
The neuron models considered include integrate-and-fire neurons and ON-OFF<br />
neuron pairs with threshold-and-fire spiking mechanisms. All thresholds are assumed<br />
to be random. We show that for both time-varying and space-time-varying stimuli neural<br />
spike encoding is akin to taking noisy measurements on the stimulus.<br />
Second, we formulate the reconstruction problem as the minimization of a<br />
suitable cost functional in a finite-dimensional vector space and provide an explicit<br />
algorithm for stimulus recovery. We also present a general solution using the theory of<br />
smoothing splines in Reproducing Kernel Hilbert Spaces. We provide examples for both<br />
synthetic video and natural scenes, and show that the quality of the<br />
reconstruction degrades gracefully as the threshold variability of the neurons increases.<br />
Third, we demonstrate a number of simple operations on the original visual stimulus<br />
including translations, rotations and zooming. All these operations are natively executed<br />
in the spike domain. The processed spike trains are decoded for the faithful recovery<br />
of the stimulus and its transformations.<br />
Finally, we extend the above results to neural encoding circuits built with Hodgkin-Huxley<br />
neurons.<br />
References:<br />
Aurel A. Lazar, Eftychios A. Pnevmatikakis and Yiyin Zhou,<br />
Encoding Natural Scenes with Neural Circuits with Random Thresholds, Vision Research, 2010,<br />
Special Issue on Mathematical Models of Visual Coding,<br />
http://dx.doi.org/10.1016/j.visres.2010.03.015<br />
Aurel A. Lazar,<br />
Population Encoding with Hodgkin-Huxley Neurons,<br />
IEEE Transactions on Information Theory, Volume 56, Number 2, pp. 821-837, February, 2010,<br />
Special Issue on Molecular Biology and Neuroscience,<br />
http://dx.doi.org/10.1109/TIT.2009.2037040<br />
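For intuition about how spikes act as noisy measurements in this framework, here is a minimal sketch (my own toy, not the authors' implementation) of an ideal integrate-and-fire encoder whose threshold is redrawn at random after every spike; each inter-spike integral of the biased stimulus then equals one threshold draw:

```python
import numpy as np

def iaf_encode_random_threshold(u, dt, bias=1.0, mean_thresh=0.05,
                                thresh_std=0.005, seed=0):
    """Encode a stimulus u(t) with an ideal integrate-and-fire neuron
    whose firing threshold is redrawn at random after every spike.

    Each inter-spike interval [t_k, t_{k+1}] satisfies the t-transform
        integral of (u(t) + bias) dt = threshold_k,
    so the spike train amounts to noisy integral measurements of u."""
    rng = np.random.default_rng(seed)
    spikes, v = [], 0.0
    thresh = rng.normal(mean_thresh, thresh_std)
    for k, uk in enumerate(u):
        v += (uk + bias) * dt              # leakless integration
        if v >= thresh:                    # threshold crossing -> spike
            spikes.append(k * dt)
            v = 0.0                        # reset the membrane
            thresh = rng.normal(mean_thresh, thresh_std)  # new random threshold
    return spikes

t = np.arange(0, 1, 1e-3)
stim = 0.5 * np.sin(2 * np.pi * 3 * t)
spike_times = iaf_encode_random_threshold(stim, 1e-3)
```

All parameter values (bias, threshold statistics) are arbitrary choices for the illustration, not figures from the paper.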
<br />
'''11 Nov 2010''' (UCB holiday)<br />
* Speaker: Martha Nari Havenith<br />
* Affiliation: UCL<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: Finding spike timing in the visual cortex - Oscillations as the internal clock of vision?<br />
* Abstract:<br />
<br />
'''19 Nov 2010''' (note: on Friday because of SFN)<br />
* Speaker: Dan Butts<br />
* Affiliation: UMD<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: Common roles of inhibition in visual and auditory processing.<br />
* Abstract: The role of inhibition in sensory processing is often obscured in extracellular recordings, because the absence of a neuronal response associated with inhibition might also be explained by a simple lack of excitation. However, increasing evidence from intracellular recordings demonstrates important roles of inhibition in shaping the stimulus selectivity of sensory neurons in both the visual and auditory systems. We have developed a nonlinear modeling approach that can identify putative excitatory and inhibitory inputs to a neuron using standard extracellular recordings, and have applied these techniques to understand the role of inhibition in shaping sensory processing in visual and auditory areas. In pre-cortical visual areas (retina and LGN), we find that inhibition likely plays a role in generating temporally precise responses, and mediates adaptation to changing contrast. In an auditory pre-cortical area (the inferior colliculus), the identified inhibition has a nearly identical appearance and function in temporal processing and adaptation. Thus, we predict common roles of inhibition in these sensory areas, and more generally demonstrate methods for characterizing the nonlinear computations that comprise sensory processing.<br />
<br />
'''24 Nov 2010'''<br />
* Speaker: Eizaburo Doi<br />
* Affiliation: NYU<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
<br />
'''29 Nov 2010 - informal talk'''<br />
* Speaker: Eero Lehtonen<br />
* Affiliation: UTU Finland<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Memristors<br />
* Abstract:<br />
<br />
'''1 Dec 2010'''<br />
* Speaker: Gadi Geiger<br />
* Affiliation: MIT<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: Visual and Auditory Perceptual Modes that Characterize Dyslexics<br />
* Abstract: I will describe how dyslexics’ visual and auditory perception is wider and more diffuse than that of typical readers, which suggests wider neural tuning in dyslexics. In addition, I will describe how this processing relates to difficulties in reading. To strengthen the argument, and more importantly to help dyslexics, I will describe a regimen of practice that results in improved reading while narrowing perception.<br />
<br />
<br />
'''13 Dec 2010'''<br />
* Speaker: Jorg Lueke<br />
* Affiliation: FIAS<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Linear and Non-linear Approaches to Component Extraction and Their Applications to Visual Data<br />
* Abstract: In the nervous system of humans and animals, sensory data are represented as combinations of elementary data components. While for data such as sound waveforms the elementary components combine linearly, other data can better be modeled by non-linear forms of component superpositions. I motivate and discuss two models with binary latent variables: one using standard linear superpositions of basis functions and one using non-linear superpositions. Crucial for the applicability of both models are efficient learning procedures. I briefly introduce a novel training scheme (ET) and show how it can be applied to probabilistic generative models. For linear and non-linear models the scheme efficiently infers the basis functions as well as the level of sparseness and data noise. In large-scale applications to image patches, we show results on the statistics of inferred model parameters. Differences between the linear and non-linear models are discussed, and both models are compared to results of standard approaches in the literature and to experimental findings. Finally, I briefly discuss learning in a recent model that takes explicit component occlusions into account.<br />
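The contrast between linear and non-linear component superposition can be made concrete in a few lines. In the sketch below (my own illustration; the non-linear model in this line of work combines components via a pointwise max, in the spirit of maximal causes analysis), data are generated from binary latents either by summing the active components or by letting each observed dimension take its strongest active cause:

```python
import numpy as np

rng = np.random.default_rng(4)
n_comp, dim = 5, 20
W = np.abs(rng.normal(0, 1, (n_comp, dim)))   # non-negative basis functions
s = rng.random(n_comp) < 0.4                  # binary latent variables

# Standard linear superposition: active components sum
x_linear = W[s].sum(axis=0)
# Non-linear (max) superposition: each pixel takes the strongest active cause
x_max = W[s].max(axis=0, initial=0.0)
```

With non-negative components, the linear superposition always dominates the max superposition elementwise, which is one way occlusion-like saturation differs from simple addition.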
<br />
'''15 Dec 2010'''<br />
* Speaker: Claudia Clopath<br />
* Affiliation: Universite Paris Descartes<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
<br />
'''18 Jan 2011'''<br />
* Speaker: Siwei Lyu<br />
* Affiliation: Computer Science Department, University at Albany, SUNY<br />
* Host: Bruno<br />
* Status: confirmed <br />
* Title: Divisive Normalization as an Efficient Coding Transform: Justification and Evaluation<br />
* Abstract:<br />
<br />
'''19 Jan 2011'''<br />
* Speaker: David Field (informal talk)<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: Tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''25 Jan 2011'''<br />
* Speaker: Ruth Rosenholtz<br />
* Affiliation: Dept. of Brain & Cognitive Sciences, Computer Science and AI Lab, MIT<br />
* Host: Bruno<br />
* Status: Confirmed <br />
* Title: What your visual system sees where you are not looking<br />
* Abstract:<br />
<br />
'''26 Jan 2011'''<br />
* Speaker: Ernst Niebur<br />
* Affiliation: Johns Hopkins U<br />
* Host: Fritz<br />
* Status: Confirmed <br />
* Title: <br />
* Abstract:<br />
<br />
'''16 March 2011'''<br />
* Speaker: Vladimir Itskov<br />
* Affiliation: University of Nebraska-Lincoln<br />
* Host: Chris<br />
* Status: Confirmed <br />
* Title: <br />
* Abstract:<br />
<br />
'''23 March 2011'''<br />
* Speaker: Bruce Cumming<br />
* Affiliation: National Institutes of Health<br />
* Host: Ivana<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract:<br />
<br />
'''27 April 2011'''<br />
* Speaker: Lubomir Bourdev<br />
* Affiliation: Computer Science, UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: "Poselets and Their Applications in High-Level Computer Vision Problems"<br />
* Abstract:<br />
<br />
'''12 May 2011 (note: Thursday)'''<br />
* Speaker: Jack Culpepper<br />
* Affiliation: Redwood Center/EECS<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''26 May 2011'''<br />
* Speaker: Ian Stevenson<br />
* Affiliation: Northwestern University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Explaining tuning curves by estimating interactions between neurons<br />
* Abstract: One of the central tenets of systems neuroscience is that tuning curves are a byproduct of the interactions between neurons. Using multi-electrode recordings and recently developed inference techniques we can begin to examine this idea in detail and study how well we can explain the functional properties of neurons using the activity of other simultaneously recorded neurons. Here we examine datasets from 6 different brain areas recorded during typical sensorimotor tasks each with ~100 simultaneously recorded neurons. Using these datasets we measured the extent to which interactions between neurons can explain the tuning properties of individual neurons. We found that, in almost all areas, modeling interactions between 30-50 neurons allows more accurate spike prediction than tuning curves. This suggests that tuning can, in some sense, be explained by interactions between neurons in a variety of brain areas, even when recordings consist of relatively small numbers of neurons.<br />
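The comparison in the abstract can be caricatured with a toy population (entirely synthetic; not the authors' data or methods): predict one neuron's spike count either from the stimulus alone (a tuning-curve model) or from its simultaneously recorded neighbors (a coupling model). Here a shared gain fluctuation plays the role of the interactions, so the coupling model predicts better:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 5000, 30                          # time bins, neurons
theta = rng.uniform(0, 2 * np.pi, T)     # stimulus direction in each bin
prefs = rng.uniform(0, 2 * np.pi, N)     # preferred directions
shared = rng.normal(0, 1, T)             # shared fluctuation ("interactions")
rates = np.exp(np.cos(theta[:, None] - prefs[None, :]) + 0.7 * shared[:, None])
counts = rng.poisson(rates)              # Poisson spike counts

target, neighbors = counts[:, 0], counts[:, 1:]

def lstsq_mse(X, y):
    """Mean squared error of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((y - X @ w) ** 2)

mse_tuning = lstsq_mse(np.column_stack([np.cos(theta), np.sin(theta)]), target)
mse_coupling = lstsq_mse(neighbors, target)
```

Because the neighbors carry both the stimulus and the shared fluctuation, `mse_coupling` comes out below `mse_tuning` in this toy, mirroring the abstract's finding that interactions explain spiking better than tuning curves alone.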
<br />
'''1 June 2011'''<br />
* Speaker: Michael Oliver<br />
* Affiliation: Gallant lab<br />
* Host: Bruno<br />
* Status: Tentative <br />
* Title: <br />
* Abstract:<br />
<br />
'''8 June 2011'''<br />
* Speaker: Alyson Fletcher<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: Generalized Approximate Message Passing for Neural Receptive Field Estimation and Connectivity<br />
* Abstract: Fundamental to understanding sensory encoding and connectivity of neurons are effective tools for developing and validating complex mathematical models from experimental data. In this talk, I present a graphical models approach to the problems of neural connectivity reconstruction under multi-neuron excitation and to receptive field estimation of sensory neurons in response to stimuli. I describe a new class of Generalized Approximate Message Passing (GAMP) algorithms for a general class of inference problems on graphical models, based on Gaussian approximations of loopy belief propagation. The GAMP framework is extremely general, provides a systematic procedure for incorporating a rich class of nonlinearities, and is computationally tractable with large amounts of data. In addition, for both the connectivity reconstruction and parameter estimation problems, I show that GAMP-based estimation can naturally incorporate sparsity constraints in the model that arise from the fact that only a small fraction of the potential inputs have any influence on the output of a particular neuron. A simulation of reconstruction of cortical neural mapping under multi-neuron excitation shows that GAMP offers improvement over previous compressed sensing methods. The GAMP method is also validated on estimation of linear nonlinear Poisson (LNP) cascade models for neural responses of salamander retinal ganglion cells.<br />
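GAMP itself takes some machinery to implement; as a much simpler stand-in for the sparsity-constrained receptive field estimation it performs, here is a sketch using ISTA (iterative soft-thresholding) to recover a sparse linear receptive field from noisy responses to white-noise stimuli. All names and parameter values are my own choices for the illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_stim, dim = 400, 100
w_true = np.zeros(dim)
w_true[rng.choice(dim, 5, replace=False)] = rng.normal(0, 1, 5)  # sparse RF
X = rng.normal(0, 1, (n_stim, dim))                 # white-noise stimuli
y = X @ w_true + 0.1 * rng.normal(0, 1, n_stim)     # noisy responses

def ista(X, y, lam, n_iter=500):
    """Iterative soft-thresholding for min_w 0.5*||y - Xw||^2 + lam*||w||_1."""
    L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = w - X.T @ (X @ w - y) / L    # gradient step on the quadratic term
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return w

w_hat = ista(X, y, lam=8.0)
```

The L1 penalty encodes the same prior the abstract invokes — only a small fraction of inputs influence the neuron — though GAMP reaches such estimates by message passing rather than proximal gradient steps.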
<br />
=== 2009/10 academic year ===<br />
<br />
'''2 September 2009''' <br />
* Speaker: Keith Godfrey<br />
* Affiliation: University of Cambridge<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''7 October 2009'''<br />
* Speaker: Anita Schmid<br />
* Affiliation: Cornell University<br />
* Host: Kilian<br />
* Status: Confirmed<br />
* Title: Subpopulations of neurons in visual area V2 perform differentiation and integration operations in space and time<br />
* Abstract: The interconnected areas of the visual system work together to find object boundaries in visual scenes. Primary visual cortex (V1) mainly extracts oriented luminance boundaries, while secondary visual cortex (V2) also detects boundaries defined by differences in texture. How the outputs of V1 neurons are combined to allow for the extraction of these more complex boundaries in V2 is as yet unclear. To address this question, we probed the processing of orientation signals in single neurons in V1 and V2, focusing on response dynamics of neurons to patches of oriented gratings and to combinations of gratings in neighboring patches and sequential time frames. We found two kinds of response dynamics in V2, both of which are different from those of V1 neurons. While V1 neurons in general prefer one orientation, one subpopulation of V2 neurons (“transient”) shows a temporally dynamic preference, resulting in a preference for changes in orientation. The second subpopulation of V2 neurons (“sustained”) responds similarly to V1 neurons, but with a delay. The dynamics of nonlinear responses to combinations of gratings reinforce these distinctions: the dynamics enhance the preference of V1 neurons for continuous orientations, and enhance the preference of V2 transient neurons for discontinuous ones. We propose that transient neurons in V2 perform a differentiation operation on the V1 input, both spatially and temporally, while the sustained neurons perform an integration operation. We show that a simple feedforward network with delayed inhibition can account for the temporal but not for the spatial differentiation operation.<br />
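The mechanism proposed at the end of the abstract — delayed inhibition producing temporal differentiation — can be sketched with a toy rate model (my own illustration, not the authors' network). The "transient" unit subtracts a delayed copy of its orientation-tuned drive, so it keeps responding only while the orientation keeps changing:

```python
import numpy as np

def v2_toy(orientations, pref=0.0, delay=5, steps_per_frame=10):
    """Toy 'transient' V2 unit: orientation-tuned drive minus a delayed
    inhibitory copy of itself, i.e. a temporal differentiator."""
    frames = np.repeat(orientations, steps_per_frame)
    drive = np.exp(np.cos(2 * (frames - pref)))        # V1-like tuning curve
    delayed = np.concatenate([np.zeros(delay), drive[:-delay]])
    return np.maximum(drive - delayed, 0.0)            # rectified difference

constant = np.zeros(20)                                 # orientation never changes
alternating = np.where(np.arange(20) % 2 == 0, 0.0, np.pi / 2)
r_const = v2_toy(constant)
r_alt = v2_toy(alternating)
```

For the fixed grating the response dies out once the delayed inhibition catches up; for the alternating grating each orientation change outruns the inhibition and produces a fresh transient.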
<br />
'''28 October 2009'''<br />
* Speaker: Andrea Benucci<br />
* Affiliation: Institute of Ophthalmology, University College London<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Stimulus dependence of the functional connectivity between neurons in primary visual cortex<br />
* Abstract: It is known that visual stimuli are encoded by the concerted activity of large populations of neurons in visual cortical areas. However, it is only recently that recording techniques have been made available to study such activations from large ensembles of neurons simultaneously, with millisecond temporal precision and tens of microns spatial resolution. I will present data from voltage-sensitive dye (VSD) imaging and multi-electrode recordings (“Utah” probes) from the primary visual cortex of the cat (V1). I will discuss the relationship between two fundamental cortical maps of the visual system: the map of retinotopy and the map of orientation. Using spatially localized and full-field oriented stimuli, we studied the functional interdependency of these maps. I will describe traveling and standing waves of cortical activity and their key role as a dynamical substrate for the spatio-temporal coding of visual information. I will further discuss the properties of the spatio-temporal code in the context of continuous visual stimulation. While recording population responses to a sequence of oriented stimuli, we asked how responses to individual stimuli summate over time. We found that such rules are mostly linear, supporting the idea that spatial and temporal codes in area V1 operate largely independently. However, these linear rules of summation fail when the visual drive is removed, suggesting that the visual cortex can readily switch between a dynamical regime where either feed-forward or intra-cortical inputs determine the response properties of the network.<br />
<br />
'''12 November 2009 (Thursday)'''<br />
* Speaker: Song-Chun Zhu<br />
* Affiliation: UCLA<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''18 November 2009'''<br />
* Speaker: Dan Graham<br />
* Affiliation: Dept. of Mathematics, Dartmouth College<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: The Packet-Switching Brain: A Hypothesis<br />
* Abstract: Despite great advances in our understanding of neural responses to natural stimuli, the basic structure of the neural code remains elusive. In this talk, I will describe a novel hypothesis regarding the fundamental structure of neural coding in mammals. In particular, I propose that an internet-like routing architecture (specifically packet-switching) underlies neocortical processing, and I propose means of testing this hypothesis via neural response sparseness measurements. I will synthesize a host of suggestive evidence that supports this notion and will, more generally, argue in favor of a large scale shift from the now dominant “computer metaphor,” to the “internet metaphor.” This shift is intended to spur new thinking with regard to neural coding, and its main contribution is to privilege communication over computation as the prime goal of neural systems.<br />
<br />
'''16 December 2009'''<br />
* Speaker: Pietro Berkes<br />
* Affiliation: Volen Center for Complex Systems, Brandeis University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Generative models of vision: from sparse coding toward structured models<br />
* Abstract: From a computational perspective, one can think of visual perception as the problem of analyzing the light patterns detected by the retina to recover their external causes. This process requires combining the incoming sensory evidence with internal prior knowledge about general properties of visual elements and the way they interact, and can be formalized in a class of models known as causal generative models. In the first part of the talk, I will discuss the first and most established generative model, namely the sparse coding model. Sparse coding has been largely successful in showing how the main characteristics of simple cells receptive fields can be accounted for based solely on the statistics of natural images. I will briefly review the evidence supporting this model, and contrast it with recent data from the primary visual cortex of ferrets and rats showing that the sparseness of neural activity over development and anesthesia seems to follow trends opposite to those predicted by sparse coding. In the second part, I will argue that the generative point of view calls for models of natural images that take into account more of the structure of the visual environment. I will present a model that takes a first step in this direction by incorporating the fundamental distinction between identity and attributes of visual elements. After learning, the model mirrors several aspects of the organization of V1, and results in a novel interpretation of complex and simple cells as parallel populations of cells, coding for different aspects of the visual input. Further steps toward more structured generative models might thus lead to the development of a more comprehensive account of visual processing in the visual cortex.<br />
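A concrete way to quantify the "sparseness of neural activity" that the abstract says was tracked over development is a normalized sparseness index such as the Treves–Rolls / Vinje–Gallant measure (a common choice; the work under discussion may of course use a different statistic):

```python
import numpy as np

def sparseness(rates):
    """Normalized Treves-Rolls sparseness, in [0, 1]:
    0 for perfectly uniform activity, 1 when a single unit is active."""
    r = np.asarray(rates, dtype=float)
    n = r.size
    a = np.mean(r) ** 2 / np.mean(r ** 2)   # activity ratio
    return (1.0 - a) / (1.0 - 1.0 / n)

dense = np.ones(100)                   # every neuron fires equally
sparse = np.zeros(100)
sparse[0] = 1.0                        # a single active neuron
```

The two extreme cases bracket the scale: `sparseness(dense)` is 0 and `sparseness(sparse)` is 1, with real population responses falling in between.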
<br />
'''6 January 2010'''<br />
* Speaker: Susanne Still<br />
* Affiliation: U of Hawaii<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''20 January 2010'''<br />
* Speaker: Tom Dean<br />
* Affiliation: Google<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Accelerating Computer Vision and Machine Learning Algorithms with Graphics Processors<br />
* Abstract: Graphics processors (GPUs) and massively-multi-core architectures are becoming more powerful, less costly and more energy efficient, and the related programming language issues are beginning to sort themselves out. That said, most researchers don’t want to be writing code that depends on any particular architecture or parallel programming model. Linear algebra, Fourier analysis and image processing have standard libraries that are being ported to exploit SIMD parallelism in GPUs. We can depend on the massively-multi-core machines du jour to support these libraries and on the high-performance-computing (HPC) community to do the porting for us or with us. These libraries can significantly accelerate important applications in image processing, data analysis and information retrieval. We can develop APIs and the necessary run-time support so that code relying on these libraries will run on any machine in a cluster of computers but exploit GPUs whenever available. This strategy allows us to move toward hybrid computing models that enable a wider range of opportunities for parallelism without requiring the special training of programmers or the disadvantages of developing code that depends on specialized hardware or programming models. This talk summarizes the state of the art in massively-multi-core architectures, presents experimental results that demonstrate the potential for significant performance gains in the two general areas of image processing and machine learning, provides examples of the proposed programming interface, and presents some more detailed experimental results on one particular problem involving video-content analysis.<br />
<br />
'''27 January 2010'''<br />
* Speaker: David Philiponna<br />
* Affiliation: Paris<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''24 February 2010'''<br />
* Speaker: Gordon Pipa<br />
* Affiliation: U Osnabrueck/MPI Frankfurt<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''3 March 2010'''<br />
* Speaker: Gaute Einevoll<br />
* Affiliation: UMB, Norway<br />
* Host: Amir<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
<br />
'''4 March 2010'''<br />
* Speaker: Harvey Swadlow<br />
* Affiliation: <br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''8 April 2010'''<br />
* Speaker: Alan Yuille <br />
* Affiliation: UCLA<br />
* Host: Amir<br />
* Status: Confirmed (for 1pm)<br />
* Title: <br />
* Abstract:<br />
<br />
'''28 April 2010'''<br />
* Speaker: Dharmendra Modha - cancelled<br />
* Affiliation: IBM<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''5 May 2010'''<br />
* Speaker: David Zipser<br />
* Affiliation: UCB<br />
* Host: Daniel Little<br />
* Status: Tentative<br />
* Title: Brytes 2:<br />
* Abstract:<br />
<br />
Brytes are little brains that can be assembled into larger, smarter brains. In my first talk I presented a biologically plausible, computationally tractable model of brytes and described how they can be used as subunits to build brains with interesting behaviors.<br />
<br />
In this talk I will first show how large numbers of brytes can cooperate to perform complicated actions such as arm and hand manipulations in the presence of obstacles. Then I will describe a strategy for a higher level of control that informs each bryte what role it should play in accomplishing the current task. These results could have considerable significance for understanding the brain and may also be applicable to robotics and BMI.<br />
<br />
'''12 May 2010'''<br />
* Speaker: Frank Werblin (Redwood group meeting - internal only)<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: Tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''19 May 2010'''<br />
* Speaker: Anna Judith<br />
* Affiliation: UCB<br />
* Host: Daniel Little (Redwood Lab Meeting - internal only)<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:</div>Giselyhttps://rctn.org/w/index.php?title=TCN_Paper_Ideas&diff=8441TCN Paper Ideas2016-01-08T21:02:59Z<p>Gisely: /* Spring 2016 */</p>
<hr />
<div>Post ideas about interesting papers to read below.<br />
<br />
==Spring 2016==<br />
<br />
Ideas from the Nando de Freitas AMA:<br />
<br />
* Teaching machines to read and comprehend, http://arxiv.org/abs/1506.03340<br />
* Pointer networks, http://arxiv.org/abs/1506.03134<br />
* Neural GPUs learn algorithms, http://arxiv.org/abs/1511.08228<br />
* Learning to see by moving, http://arxiv.org/abs/1505.01596<br />
* Unitary evolution recurrent neural networks, http://arxiv.org/abs/1511.06464<br />
* Action-Conditional Video Prediction using Deep Networks in Atari Games, http://arxiv.org/abs/1507.08750<br />
* Deep Reinforcement Learning with Double Q-learning, http://arxiv.org/abs/1509.06461<br />
* Towards Trainable Media: Using Waves for Neural Network-Style Training, http://arxiv.org/abs/1510.03776<br />
* Weakly-supervised Disentangling with Recurrent Transformations for 3D View Synthesis, http://www-personal.umich.edu/~reedscot/nips15_rotator_final.pdf<br />
* Hippocampal place cells construct reward related sequences through unexplored space<br />
* Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images, http://arxiv.org/abs/1506.07365<br />
<br />
Vijay Mohan, a post-doc from UNC, generously put together this reading list for me on computational models of neuromodulators. I haven't read them all yet, but it looks like some good stuff, and it might be a good way to add some neuroscience to the mix to counterbalance all the deep learning.<br />
<br />
* [http://www.ncbi.nlm.nih.gov/pubmed/19346478 Learning reward timing in cortex through reward dependent expression of synaptic plasticity]<br />
* [http://www.cell.com/cell/abstract/S0092-8674%2815%2900973-3 Central Cholinergic Neurons Are Rapidly Recruited by Reinforcement Feedback]<br />
* [http://www.sciencedirect.com/science/article/pii/S0960982215004790 Selective Activation of a Putative Reinforcement Signal Conditions Cued Interval Timing in Primary Visual Cortex]<br />
* [http://www.sciencedirect.com/science/article/pii/S0896627305003624 Uncertainty, Neuromodulation, and Attention]<br />
* [http://www.gatsby.ucl.ac.uk/~dayan/papers/25lessons.pdf Twenty-Five Lessons from Computational Neuromodulation]</div>Giselyhttps://rctn.org/w/index.php?title=TCN&diff=8437TCN2016-01-05T20:35:18Z<p>Gisely: /* Fall 2015 */</p>
<hr />
<div>== Topics in Computational Neuroscience ==<br />
<br />
For ideas about some interesting papers to discuss, have a look here: [[TCN Paper Ideas]]<br />
<br />
===Spring 2016===<br />
[Feb 04] <font color="#ff0000">Sean Mackesey</font><br />
<br />
[Jan 28] <font color="#ff0000">Dylan Paiton</font> - R Goroshin, J Bruna, J Tompson, D Eigen, Y LeCun (2015). Unsupervised Learning of Spatiotemporally Coherent Metrics [http://arxiv.org/pdf/1412.6056.pdf] <br />
<br />
[Jan 21] <font color="#ff0000">Eric Weiss</font><br />
<br />
[Jan 14] <font color="#ff0000">Daniel Toker</font> - AK Seth, AB Barrett, L Barnett (2011). Causal Density and Integrated Information as Measures of Conscious Level [http://rsta.royalsocietypublishing.org/content/369/1952/3748.short]<br />
<br />
===Fall 2015===<br />
<br />
[Dec 17] <font color="#ff0000">Charles Frye</font> - BM Lake, R Salakhutdinov, JB Tenenbaum (2015). Human-Level Concept Learning Through Probabilistic Program Induction [http://www.sciencemag.org/content/350/6266/1332.abstract]<br />
<br />
[Dec 03] <font color="#ff0000">Omer Hazon</font> - D Soudry, I Hubara, R Meir (2014). Expectation Backpropagation [http://papers.nips.cc/paper/5269-expectation-backpropagation-parameter-free-training-of-multilayer-neural-networks-with-continuous-or-discrete-weights.pdf]<br />
<br />
[Nov 26] Thanksgiving break<br />
<br />
[Nov 19] <font color="#ff0000">Eric Dodds</font> - EC Smith, MS Lewicki (2006). Efficient Auditory Coding [http://www.nature.com/nature/journal/v439/n7079/full/nature04485.html]<br />
<br />
[Nov 12] <font color="#ff0000">Vasha Dutell</font> - H Hosoya, A Hyvarinen (2015). A Hierarchical Statistical Model of Natural Images Explains Tuning Properties in V2 [http://www.jneurosci.org/content/35/29/10412.full]<br />
<br />
=== Summer 2014 ===<br />
<br />
* [June 19] Buzsaki & Mizuseki (2014). The log-dynamic brain: how skewed distributions affect network operations. [http://buzsakilab.com/content/PDFs/Mizuseki2014.pdf]<br />
<br />
* [June 12] Hukushima & Nemoto (1996). Exchange Monte Carlo method and application to spin glass simulations. [http://arxiv.org/pdf/cond-mat/9512035v1.pdf]<br />
<br />
* [June 5] Shi & Griffiths (2009). Neural implementation of hierarchical bayesian inference by importance sampling. [http://cocosci.berkeley.edu/tom/papers/neuralIS.pdf]<br />
<br />
* [May 29] Petersen & Crochet (2013). Synaptic computation and sensory processing in neocortical layer 2/3. [http://www.sciencedirect.com/science/article/pii/S0896627313002675]<br />
<br />
* [May 22] Laje R, Buonomano DV (2013) Robust timing and motor patterns by taming chaos in recurrent neural networks. Nat. Neurosci. 16:925-933 [http://www.neurobio.ucla.edu/~dbuono/PDFs/LajeBuonomano_NatNeurosci_13.pdf]<br />
<br />
=== Spring 2014 ===<br />
<br />
* [Jan 20] Sutskever 2012 - Training Recurrent Neural Networks. [http://www.cs.utoronto.ca/~ilya/pubs/ilya_sutskever_phd_thesis.pdf]<br />
<br />
=== Fall 2013 ===<br />
<br />
* [Sep 18] Guillery & Sherman 2010 - Branched thalamic afferents: What are the messages that they relay to the cortex? [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3657838/#!po=22.7273]<br />
<br />
=== Summer 2013 ===<br />
<br />
* [July 10] Curto & Itskov 2008 - Cell Groups Reveal Structure of Stimulus Space [http://www.math.unl.edu/~ccurto2/papers/cell-groups.pdf]<br />
<br />
=== Spring 2013 ===<br />
<br />
* [Apr 8] Burak et al. 2009 - Accurate Path Integration in Continuous Attractor Network Models of Grid Cells [http://ucelinks.cdlib.org:8888/sfx_local?sid=google&auinit=Y&aulast=Burak&atitle=Accurate+path+integration+in+continuous+attractor+network+models+of+grid+cells&id=doi:10.1371/journal.pcbi.1000291&title=PLoS+Computational+Biology&volume=5&issue=2&date=2009&spage=e1000291&issn=1553-734X] [http://www.jneurosci.org/content/16/6/2112.full.pdf]<br />
<br />
* [Mar 27] Sreenivasan et al. 2011 - Grid cells generate an analog error-correcting code for singularly precise neural computation. [http://www.clm.utexas.edu/fietelab/Papers/nn.2901.pdf] <br />
<br />
* [Mar 20] Killian et al. - A map of visual space in the primate entorhinal cortex [http://ucelinks.cdlib.org:8888/sfx_local?sid=google&auinit=NJ&aulast=Killian&atitle=A+map+of+visual+space+in+the+primate+entorhinal+cortex&id=doi:10.1038/nature11587&title=Nature&volume=491&issue=7426&date=2012&spage=761&issn=0028-0836]<br />
<br />
* [Mar 13] Doyle et al. 2011 - Architecture, constraints and behavior [http://www.pnas.org/content/108/suppl.3/15624.full.pdf+html]<br />
<br />
* [Mar 6] Grady 2006 - Random Walks for Image Segmentation [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1704833&tag=1]<br />
<br />
* [Feb 27] Todorov 2012 - Parallels between sensory and motor information processing [http://scholar.google.com/scholar?hl=en&q=Parallels+between+sensory+and+motor+information+processing&btnG=&as_sdt=1%2C5&as_sdtp=]<br />
<br />
* [Feb 13] Sohl-Dickstein 2012 - The Natural Gradient by Analogy to Signal Whitening, and Recipes and Tricks for its Use [http://arxiv.org/pdf/1205.1828v1.pdf]<br />
<br />
* [Feb 06] Girosi 1998 - An Equivalence Between Sparse Approximation and Support Vector Machines [http://ucelinks.cdlib.org:8888/sfx_local?sid=google&auinit=F&aulast=Girosi&atitle=An+equivalence+between+sparse+approximation+and+support+vector+machines&id=doi:10.1162/089976698300017269&title=Neural+computation&volume=10&issue=6&date=1998&spage=1455&issn=0899-7667][http://18.7.29.232/bitstream/handle/1721.1/7289/AIM-1606.pdf?sequence=2]<br />
<br />
* [Jan 30] Zipser et al. 1996 - Contextual Modulation in Primary Visual Cortex [http://www.jneurosci.org/content/16/22/7376.full.pdf+html]<br />
Ayzenshtat et al. 2012 - Population Response to Natural Images in the Primary Visual Cortex Encodes Local Stimulus Attributes and Perceptual Processing [http://www.jneurosci.org/content/32/40/13971.full.pdf+html]<br />
<br />
* [Jan 23] Gillenwater et al. 2012 - Near-Optimal MAP Inference for Determinantal Point Processes [http://books.nips.cc/papers/files/nips25/NIPS2012_1264.pdf] [http://web.eecs.umich.edu/~kulesza/pubs/thesis.pdf]<br />
<br />
* [Jan 08] Carhart-Harris et al. 2012 - Neural correlates of the psychedelic state as determined by fMRI studies with psilocybin [http://www.pnas.org/content/109/6/2138.long]<br />
<br />
=== Fall 2012 ===<br />
<br />
* [Aug 22] Maass et al. - Liquid State Computing [http://www.igi.tugraz.at/maass/psfiles/130.pdf] [http://www.igi.tugraz.at/maass/psfiles/186.pdf]<br />
<br />
* [Aug 29] Salakhutdinov & Hinton 2012 - An Efficient Learning Procedure for Deep Boltzmann Machines [http://www.mitpressjournals.org/doi/pdfplus/10.1162/NECO_a_00311]<br />
<br />
* [Sep 05] Coates & Ng 2011 - An Analysis of Single-Layer Networks in Unsupervised Feature Learning [http://www.stanford.edu/~acoates/papers/coatesleeng_aistats_2011.pdf]<br />
<br />
* [Sep 12] Quiroga 2012 - Concept Cells: The Building Blocks of Declarative Memory Functions [http://www.phys.psu.edu/~collins/RNI/Quian_Quiroga_concept_cell_review_2012.pdf]<br />
<br />
* [Oct 10] Mora & Bialek 2011 - Are Biological Systems Poised at Criticality? [http://www.sns.ias.edu/pitp/2012files/Are_Biological_Systems.pdf]<br />
<br />
* [Oct 17] Newman 2005 - Power laws, Pareto distributions and Zipf's law. [http://www-personal.umich.edu/~mejn/courses/2006/cmplxsys899/powerlaws.pdf]<br />
<br />
* [Nov 28] Todorov 2004 - Optimality Principles in Sensorimotor Control. [http://www.nature.com/neuro/journal/v7/n9/pdf/nn1309.pdf]<br />
<br />
=== Spring 2011 ===<br />
<br />
* [Feb 10] Anastassiou et al. Ephaptic coupling of cortical neurons [http://www.nature.com/neuro/journal/v14/n2/full/nn.2727.html]<br />
* [Feb 10] Jin et al. Population receptive fields of ON and OFF thalamic inputs to an orientation column in visual cortex [http://www.nature.com/neuro/journal/v14/n2/full/nn.2729.html]<br />
<br />
* [Jan 20] A review of NIPS 2010 papers. [http://books.nips.cc/nips23.html]<br />
<br />
=== Fall 2010 ===<br />
<br />
* [Dec 9] Welling. Herding algorithms. [http://www.ics.uci.edu/%7Ewelling/publications/publications.html]<br />
<br />
* [Dec 2] Neal. MCMC using Hamiltonian dynamics. [http://www.cs.utoronto.ca/~radford/ftp/ham-mcmc.ps]<br />
<br />
* [Nov 18] Mairal et al. Task-driven dictionary learning. [http://arxiv.org/abs/1009.5358]<br />
<br />
* [Oct 21] Hamed. Self-referential dynamical systems for the self-organization of behavior in robotic systems. Ch 2-3 of [http://robot.informatik.uni-leipzig.de/research/publications/2007/ThesisHamed.pdf] <br />
<br />
* [Oct 14] Hammond, Vandergheynst, and Gribonval. Wavelets on graphs via spectral graph theory. [http://dx.doi.org/10.1016/j.acha.2010.04.005]<br />
<br />
* [Oct 7] Neal. Annealed importance sampling. [http://www.springerlink.com/content/x40w6w4r1142651k/]<br />
<br />
* [Sep 30] Martius, Herrmann. Taming the beast: Guided self-organization of behavior in autonomous robots. [http://www.springerlink.com/content/80013g12784261n3/]<br />
<br />
* [Sep 23] Bullier, Jean. "What is Fed Back?" in 23 Problems in Systems Neuroscience. [https://docs.google.com/fileview?id=0B30YK-ZtKoHXNGVlYjNlODUtNTYyOS00OTJmLWJjNWItNmUwNTljYmE3M2U4&hl=en&authkey=CMvu2LAM]<br />
<br />
=== [[Past TCN Papers]] ===<br />
<br />
=== Time and Location ===<br />
'''12:00-1:00pm''' every Thursday in the Redwood Center conference room (560 Evans). Please sign up to the email list (below) for announcements on meeting dates.<br />
<br />
=== Overview ===<br />
This journal club is aimed at graduate students from the neuroscience program and neuroscience-related life sciences, as well as students from engineering, physics, and math programs with an interest in a computational approach to studying the brain. It provides a broad survey of literature from theoretical and computational neuroscience. Readings will combine both seminal works and recent theories. We meet for one session each week.<br />
<br />
It is possible to take this seminar for credit. If you would like to do so, please mention it during journal club.<br />
<br />
If you have questions, please email the club organizer James Arnemann.<br />
<br />
=== E-mail List ===<br />
<br />
To subscribe to the journal club email list, visit [https://calmail.berkeley.edu/manage/list/listinfo/redwood_tcn@lists.berkeley.edu link]. You will receive emails twice a week about papers that will be covered in the next meeting.<br />
<br />
=== Guidelines for Presenting Papers ===<br />
Each person who selects a paper should present, in about 15-30 minutes:<br />
* an executive summary<br />
* an outline of the key points, ideas, or contributions<br />
* relevant background information<br />
* a description of the key figures<br />
* what you took away from the paper<br />
* some potential questions for discussion<br />
* you are encouraged to use whatever method to present (slides, puppets, etc.)<br />
<br />
===[[Suggestion Board]]===</div>Giselyhttps://rctn.org/w/index.php?title=TCN_Paper_Ideas&diff=8435TCN Paper Ideas2015-12-27T07:10:17Z<p>Gisely: /* Spring 2016 */</p>
<hr />
<div>Post ideas about interesting papers to read below.<br />
<br />
==Spring 2016==<br />
<br />
Ideas from the Nando de Freitas AMA:<br />
<br />
* Teaching machines to read and comprehend, http://arxiv.org/abs/1506.03340[1] <br />
* Pointer networks, http://arxiv.org/abs/1506.03134[3]<br />
* Neural GPUs learn algorithms, http://arxiv.org/abs/1511.08228[4]<br />
* Learning to see by moving, http://arxiv.org/abs/1505.01596[5]<br />
* Unitary evolution recurrent neural networks http://arxiv.org/abs/1511.06464[6]<br />
* Action-Conditional Video Prediction using Deep Networks in Atari Games, http://arxiv.org/abs/1507.08750[7]<br />
* Deep Reinforcement Learning with Double Q-learning, http://arxiv.org/abs/1509.06461[8]<br />
* Towards Trainable Media: Using Waves for Neural Network-Style Training, http://arxiv.org/abs/1510.03776[9]<br />
* Weakly-supervised Disentangling with Recurrent Transformations for 3D View Synthesis, http://www-personal.umich.edu/~reedscot/nips15_rotator_final.pdf[10]<br />
* Hippocampal place cells construct reward related sequences through unexplored space<br />
* Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images, http://arxiv.org/abs/1506.07365[11]</div>Giselyhttps://rctn.org/w/index.php?title=TCN_Paper_Ideas&diff=8434TCN Paper Ideas2015-12-27T07:09:48Z<p>Gisely: Created page with "Post ideas about interesting papers to read below. I ==Spring 2016== Ideas from the Nando Fretas AMA: * Teaching machines to read and comprehend, http://arxiv.org/abs/1506...."</p>
<hr />
<div>Post ideas about interesting papers to read below.<br />
<br />
==Spring 2016==<br />
<br />
Ideas from the Nando de Freitas AMA:<br />
<br />
* Teaching machines to read and comprehend, http://arxiv.org/abs/1506.03340[1] (can someone help me with how to add links to this properly???)<br />
* Spatial transformer networks, http://arxiv.org/abs/1506.02025[2]<br />
* Pointer networks, http://arxiv.org/abs/1506.03134[3]<br />
* Neural GPUs learn algorithms, http://arxiv.org/abs/1511.08228[4]<br />
* Learning to see by moving, http://arxiv.org/abs/1505.01596[5]<br />
* Unitary evolution recurrent neural networks http://arxiv.org/abs/1511.06464[6]<br />
* Action-Conditional Video Prediction using Deep Networks in Atari Games, http://arxiv.org/abs/1507.08750[7]<br />
* Deep Reinforcement Learning with Double Q-learning, http://arxiv.org/abs/1509.06461[8]<br />
* Towards Trainable Media: Using Waves for Neural Network-Style Training, http://arxiv.org/abs/1510.03776[9]<br />
* Weakly-supervised Disentangling with Recurrent Transformations for 3D View Synthesis, http://www-personal.umich.edu/~reedscot/nips15_rotator_final.pdf[10]<br />
* Hippocampal place cells construct reward related sequences through unexplored space<br />
* Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images, http://arxiv.org/abs/1506.07365[11]</div>Giselyhttps://rctn.org/w/index.php?title=TCN&diff=8433TCN2015-12-27T07:05:22Z<p>Gisely: /* Topics in Computational Neuroscience */</p>
<hr />
<div>== Topics in Computational Neuroscience ==<br />
<br />
For ideas about some interesting papers to discuss, have a look here: [[TCN Paper Ideas]]<br />
<br />
===Spring 2016===<br />
[Feb 04] <font color="#ff0000">Sean Mackesey</font><br />
<br />
[Jan 28] <font color="#ff0000">Dylan Paiton</font> - R Goroshin, J Bruna, J Tompson, D Eigen, Y LeCun (2015). Unsupervised Learning of Spatiotemporally Coherent Metrics [http://arxiv.org/pdf/1412.6056.pdf] <br />
<br />
[Jan 21] <font color="#ff0000">Eric Weiss</font><br />
<br />
[Jan 14] <font color="#ff0000">Daniel Toker</font> - AK Seth, AB Barrett, L Barnett (2011). Causal Density and Integrated Information as Measures of Conscious Level [http://rsta.royalsocietypublishing.org/content/369/1952/3748.short]<br />
<br />
===Fall 2015===<br />
<br />
[Dec 17] <font color="#ff0000">Charles Frye</font> - BM Lake, R Salakhutdinov, JB Tenenbaum (2015). Human-Level Concept Learning Through Probabilistic Program Induction [http://www.sciencemag.org/content/350/6266/1332.abstract]<br />
<br />
[Dec 03] <font color="#ff0000">Guy Isley</font> - D Soudry, I Hubara, R Meir (2014). Expectation Backpropagation [http://papers.nips.cc/paper/5269-expectation-backpropagation-parameter-free-training-of-multilayer-neural-networks-with-continuous-or-discrete-weights.pdf]<br />
<br />
[Nov 26] Thanksgiving break<br />
<br />
[Nov 19] <font color="#ff0000">Eric Dodds</font> - EC Smith, MS Lewicki (2006). Efficient Auditory Coding [http://www.nature.com/nature/journal/v439/n7079/full/nature04485.html]<br />
<br />
[Nov 12] <font color="#ff0000">Vasha Dutell</font> - H Hosoya, A Hyvarinen (2015). A Hierarchical Statistical Model of Natural Images Explains Tuning Properties in V2 [http://www.jneurosci.org/content/35/29/10412.full]<br />
<br />
=== Summer 2014 ===<br />
<br />
* [June 19] Buzsaki & Mizuseki (2014). The log-dynamic brain: how skewed distributions affect network operations. [http://buzsakilab.com/content/PDFs/Mizuseki2014.pdf]<br />
<br />
* [June 12] Hukushima & Nemoto (1996). Exchange Monte Carlo method and application to spin glass simulations. [http://arxiv.org/pdf/cond-mat/9512035v1.pdf]<br />
<br />
* [June 5] Shi & Griffiths (2009). Neural implementation of hierarchical bayesian inference by importance sampling. [http://cocosci.berkeley.edu/tom/papers/neuralIS.pdf]<br />
<br />
* [May 29] Petersen & Crochet (2013). Synaptic computation and sensory processing in neocortical layer 2/3. [http://www.sciencedirect.com/science/article/pii/S0896627313002675]<br />
<br />
* [May 22] Laje R, Buonomano DV (2013) Robust timing and motor patterns by taming chaos in recurrent neural networks. Nat. Neurosci. 16:925-933 [http://www.neurobio.ucla.edu/~dbuono/PDFs/LajeBuonomano_NatNeurosci_13.pdf]<br />
<br />
=== Spring 2014 ===<br />
<br />
* [Jan 20] Sutskever 2012 - Training Recurrent Neural Networks. [http://www.cs.utoronto.ca/~ilya/pubs/ilya_sutskever_phd_thesis.pdf]<br />
<br />
=== Fall 2013 ===<br />
<br />
* [Sep 18] Guillery & Sherman 2010 - Branched thalamic afferents: What are the messages that they relay to the cortex? [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3657838/#!po=22.7273]<br />
<br />
=== Summer 2013 ===<br />
<br />
* [July 10] Curto & Itskov 2008 - Cell Groups Reveal Structure of Stimulus Space [http://www.math.unl.edu/~ccurto2/papers/cell-groups.pdf]<br />
<br />
=== Spring 2013 ===<br />
<br />
* [Apr 8] Burak et al. 2009 - Accurate Path Integration in Continuous Attractor Network Models of Grid Cells [http://ucelinks.cdlib.org:8888/sfx_local?sid=google&auinit=Y&aulast=Burak&atitle=Accurate+path+integration+in+continuous+attractor+network+models+of+grid+cells&id=doi:10.1371/journal.pcbi.1000291&title=PLoS+Computational+Biology&volume=5&issue=2&date=2009&spage=e1000291&issn=1553-734X] [http://www.jneurosci.org/content/16/6/2112.full.pdf]<br />
<br />
* [Mar 27] Sreenivasan et al. 2011 - Grid cells generate an analog error-correcting code for singularly precise neural computation. [http://www.clm.utexas.edu/fietelab/Papers/nn.2901.pdf] <br />
<br />
* [Mar 20] Killian et al. - A map of visual space in the primate entorhinal cortex [http://ucelinks.cdlib.org:8888/sfx_local?sid=google&auinit=NJ&aulast=Killian&atitle=A+map+of+visual+space+in+the+primate+entorhinal+cortex&id=doi:10.1038/nature11587&title=Nature&volume=491&issue=7426&date=2012&spage=761&issn=0028-0836]<br />
<br />
* [Mar 13] Doyle et al. 2011 - Architecture, constraints and behavior [http://www.pnas.org/content/108/suppl.3/15624.full.pdf+html]<br />
<br />
* [Mar 6] Grady 2006 - Random Walks for Image Segmentation [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1704833&tag=1]<br />
<br />
* [Feb 27] Todorov 2012 - Parallels between sensory and motor information processing [http://scholar.google.com/scholar?hl=en&q=Parallels+between+sensory+and+motor+information+processing&btnG=&as_sdt=1%2C5&as_sdtp=]<br />
<br />
* [Feb 13] Sohl-Dickstein 2012 - The Natural Gradient by Analogy to Signal Whitening, and Recipes and Tricks for its Use [http://arxiv.org/pdf/1205.1828v1.pdf]<br />
<br />
* [Feb 06] Girosi 1998 - An Equivalence Between Sparse Approximation and Support Vector Machines [http://ucelinks.cdlib.org:8888/sfx_local?sid=google&auinit=F&aulast=Girosi&atitle=An+equivalence+between+sparse+approximation+and+support+vector+machines&id=doi:10.1162/089976698300017269&title=Neural+computation&volume=10&issue=6&date=1998&spage=1455&issn=0899-7667][http://18.7.29.232/bitstream/handle/1721.1/7289/AIM-1606.pdf?sequence=2]<br />
<br />
* [Jan 30] Zipser et al. 1996 - Contextual Modulation in Primary Visual Cortex [http://www.jneurosci.org/content/16/22/7376.full.pdf+html]<br />
Ayzenshtat et al. 2012 - Population Response to Natural Images in the Primary Visual Cortex Encodes Local Stimulus Attributes and Perceptual Processing [http://www.jneurosci.org/content/32/40/13971.full.pdf+html]<br />
<br />
* [Jan 23] Gillenwater et al. 2012 - Near-Optimal MAP Inference for Determinantal Point Processes [http://books.nips.cc/papers/files/nips25/NIPS2012_1264.pdf] [http://web.eecs.umich.edu/~kulesza/pubs/thesis.pdf]<br />
<br />
* [Jan 08] Carhart-Harris et al. 2012 - Neural correlates of the psychedelic state as determined by fMRI studies with psilocybin [http://www.pnas.org/content/109/6/2138.long]<br />
<br />
=== Fall 2012 ===<br />
<br />
* [Aug 22] Maass et al. - Liquid State Computing [http://www.igi.tugraz.at/maass/psfiles/130.pdf] [http://www.igi.tugraz.at/maass/psfiles/186.pdf]<br />
<br />
* [Aug 29] Salakhutdinov & Hinton 2012 - An Efficient Learning Procedure for Deep Boltzmann Machines [http://www.mitpressjournals.org/doi/pdfplus/10.1162/NECO_a_00311]<br />
<br />
* [Sep 05] Coates & Ng 2011 - An Analysis of Single-Layer Networks in Unsupervised Feature Learning [http://www.stanford.edu/~acoates/papers/coatesleeng_aistats_2011.pdf]<br />
<br />
* [Sep 12] Quiroga 2012 - Concept Cells: The Building Blocks of Declarative Memory Functions [http://www.phys.psu.edu/~collins/RNI/Quian_Quiroga_concept_cell_review_2012.pdf]<br />
<br />
* [Oct 10] Mora & Bialek 2011 - Are Biological Systems Poised at Criticality? [http://www.sns.ias.edu/pitp/2012files/Are_Biological_Systems.pdf]<br />
<br />
* [Oct 17] Newman 2005 - Power laws, Pareto distributions and Zipf's law. [http://www-personal.umich.edu/~mejn/courses/2006/cmplxsys899/powerlaws.pdf]<br />
<br />
* [Nov 28] Todorov 2004 - Optimality Principles in Sensorimotor Control. [http://www.nature.com/neuro/journal/v7/n9/pdf/nn1309.pdf]<br />
<br />
=== Spring 2011 ===<br />
<br />
* [Feb 10] Anastassiou et al. Ephaptic coupling of cortical neurons [http://www.nature.com/neuro/journal/v14/n2/full/nn.2727.html]<br />
* [Feb 10] Jin et al. Population receptive fields of ON and OFF thalamic inputs to an orientation column in visual cortex [http://www.nature.com/neuro/journal/v14/n2/full/nn.2729.html]<br />
<br />
* [Jan 20] A review of NIPS 2010 papers. [http://books.nips.cc/nips23.html]<br />
<br />
=== Fall 2010 ===<br />
<br />
* [Dec 9] Welling. Herding algorithms. [http://www.ics.uci.edu/%7Ewelling/publications/publications.html]<br />
<br />
* [Dec 2] Neal. MCMC using Hamiltonian dynamics. [http://www.cs.utoronto.ca/~radford/ftp/ham-mcmc.ps]<br />
<br />
* [Nov 18] Mairal et al. Task-driven dictionary learning. [http://arxiv.org/abs/1009.5358]<br />
<br />
* [Oct 21] Hamed. Self-referential dynamical systems for the self-organization of behavior in robotic systems. Ch 2-3 of [http://robot.informatik.uni-leipzig.de/research/publications/2007/ThesisHamed.pdf] <br />
<br />
* [Oct 14] Hammond, Vandergheynst, and Gribonval. Wavelets on graphs via spectral graph theory. [http://dx.doi.org/10.1016/j.acha.2010.04.005]<br />
<br />
* [Oct 7] Neal. Annealed importance sampling. [http://www.springerlink.com/content/x40w6w4r1142651k/]<br />
<br />
* [Sep 30] Martius, Herrmann. Taming the beast: Guided self-organization of behavior in autonomous robots. [http://www.springerlink.com/content/80013g12784261n3/]<br />
<br />
* [Sep 23] Bullier, Jean. "What is Fed Back?" in 23 Problems in Systems Neuroscience. [https://docs.google.com/fileview?id=0B30YK-ZtKoHXNGVlYjNlODUtNTYyOS00OTJmLWJjNWItNmUwNTljYmE3M2U4&hl=en&authkey=CMvu2LAM]<br />
<br />
=== [[Past TCN Papers]] ===<br />
<br />
=== Time and Location ===<br />
'''12:00-1:00pm''' every Thursday in the Redwood Center conference room (560 Evans). Please sign up to the email list (below) for announcements on meeting dates.<br />
<br />
=== Overview ===<br />
This journal club is aimed at graduate students from the neuroscience program and neuroscience-related life sciences, as well as students from engineering, physics, and math programs with an interest in a computational approach to studying the brain. It provides a broad survey of literature from theoretical and computational neuroscience. Readings will combine both seminal works and recent theories. We meet for one session each week.<br />
<br />
It is possible to take this seminar for credit. If you would like to do so, please mention it during journal club.<br />
<br />
If you have questions, please email the club organizer James Arnemann.<br />
<br />
=== E-mail List ===<br />
<br />
To subscribe to the journal club email list, visit [https://calmail.berkeley.edu/manage/list/listinfo/redwood_tcn@lists.berkeley.edu link]. You will receive emails twice a week about papers that will be covered in the next meeting.<br />
<br />
=== Guidelines for Presenting Papers ===<br />
Each person who selects a paper should present, in about 15-30 minutes:<br />
* an executive summary<br />
* an outline of the key points, ideas, or contributions<br />
* relevant background information<br />
* a description of the key figures<br />
* what you took away from the paper<br />
* some potential questions for discussion<br />
* you are encouraged to use whatever method to present (slides, puppets, etc.)<br />
<br />
===[[Suggestion Board]]===</div>Giselyhttps://rctn.org/w/index.php?title=TCN&diff=8432TCN2015-12-27T07:03:30Z<p>Gisely: /* Topics in Computational Neuroscience */</p>
<hr />
<div>== Topics in Computational Neuroscience ==<br />
<br />
For ideas about some interesting papers to discuss, have a look here: [TCN Paper Ideas]<br />
<br />
===Spring 2016===<br />
[Feb 04] <font color="#ff0000">Sean Mackesey</font><br />
<br />
[Jan 28] <font color="#ff0000">Dylan Paiton</font> - R Goroshin, J Bruna, J Tompson, D Eigen, Y LeCun (2015). Unsupervised Learning of Spatiotemporally Coherent Metrics [http://arxiv.org/pdf/1412.6056.pdf] <br />
<br />
[Jan 21] <font color="#ff0000">Eric Weiss</font><br />
<br />
[Jan 14] <font color="#ff0000">Daniel Toker</font> - AK Seth, AB Barrett, L Barnett (2011). Causal Density and Integrated Information as Measures of Conscious Level [http://rsta.royalsocietypublishing.org/content/369/1952/3748.short]<br />
<br />
===Fall 2015===<br />
<br />
[Dec 17] <font color="#ff0000">Charles Frye</font> - BM Lake, R Salakhutdinov, JB Tenenbaum (2015). Human-Level Concept Learning Through Probabilistic Program Induction [http://www.sciencemag.org/content/350/6266/1332.abstract]<br />
<br />
[Dec 03] <font color="#ff0000">Guy Isley</font> - D Soudry, I Hubara, R Meir (2014). Expectation Backpropagation [http://papers.nips.cc/paper/5269-expectation-backpropagation-parameter-free-training-of-multilayer-neural-networks-with-continuous-or-discrete-weights.pdf]<br />
<br />
[Nov 26] Thanksgiving break<br />
<br />
[Nov 19] <font color="#ff0000">Eric Dodds</font> - EC Smith, MS Lewicki (2006). Efficient Auditory Coding [http://www.nature.com/nature/journal/v439/n7079/full/nature04485.html]<br />
<br />
[Nov 12] <font color="#ff0000">Vasha Dutell</font> - H Hosoya, A Hyvarinen (2015). A Hierarchical Statistical Model of Natural Images Explains Tuning Properties in V2 [http://www.jneurosci.org/content/35/29/10412.full]<br />
<br />
=== Summer 2014 ===<br />
<br />
* [June 19] Buzsaki & Mizuseki (2014). The log-dynamic brain: how skewed distributions affect network operations. [http://buzsakilab.com/content/PDFs/Mizuseki2014.pdf]<br />
<br />
* [June 12] Hukushima & Nemoto (1996). Exchange Monte Carlo method and application to spin glass simulations. [http://arxiv.org/pdf/cond-mat/9512035v1.pdf]<br />
<br />
* [June 5] Shi & Griffiths (2009). Neural implementation of hierarchical bayesian inference by importance sampling. [http://cocosci.berkeley.edu/tom/papers/neuralIS.pdf]<br />
<br />
* [May 29] Petersen & Crochet (2013). Synaptic computation and sensory processing in neocortical layer 2/3. [http://www.sciencedirect.com/science/article/pii/S0896627313002675]<br />
<br />
* [May 22] Laje R, Buonomano DV (2013) Robust timing and motor patterns by taming chaos in recurrent neural networks. Nat. Neurosci. 16:925-933 [http://www.neurobio.ucla.edu/~dbuono/PDFs/LajeBuonomano_NatNeurosci_13.pdf]<br />
<br />
=== Spring 2014 ===<br />
<br />
* [Jan 20] Sutskever 2012 - Training Recurrent Neural Networks. [http://www.cs.utoronto.ca/~ilya/pubs/ilya_sutskever_phd_thesis.pdf]<br />
<br />
=== Fall 2013 ===<br />
<br />
* [Sep 18] Guillery & Sherman 2010 - Branched thalamic afferents: What are the messages that they relay to the cortex? [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3657838/#!po=22.7273]<br />
<br />
=== Summer 2013 ===<br />
<br />
* [July 10] Curto & Itskov 2008 - Cell Groups Reveal Structure of Stimulus Space [http://www.math.unl.edu/~ccurto2/papers/cell-groups.pdf]<br />
<br />
=== Spring 2013 ===<br />
<br />
* [Apr 8] Burak et al. 2009 - Accurate Path Integration in Continuous Attractor Network Models of Grid Cells [http://ucelinks.cdlib.org:8888/sfx_local?sid=google&auinit=Y&aulast=Burak&atitle=Accurate+path+integration+in+continuous+attractor+network+models+of+grid+cells&id=doi:10.1371/journal.pcbi.1000291&title=PLoS+Computational+Biology&volume=5&issue=2&date=2009&spage=e1000291&issn=1553-734X] [http://www.jneurosci.org/content/16/6/2112.full.pdf]<br />
<br />
* [Mar 27] Sreenivasan et al. 2011 - Grid cells generate an analog error-correcting code for singularly precise neural computation. [http://www.clm.utexas.edu/fietelab/Papers/nn.2901.pdf] <br />
<br />
* [Mar 20] Killian et al. - A map of visual space in the primate entorhinal cortex [http://ucelinks.cdlib.org:8888/sfx_local?sid=google&auinit=NJ&aulast=Killian&atitle=A+map+of+visual+space+in+the+primate+entorhinal+cortex&id=doi:10.1038/nature11587&title=Nature&volume=491&issue=7426&date=2012&spage=761&issn=0028-0836]<br />
<br />
* [Mar 13] Doyle et al. 2011 - Architecture, constraints and behavior [http://www.pnas.org/content/108/suppl.3/15624.full.pdf+html]<br />
<br />
* [Mar 6] Grady 2006 - Random Walks for Image Segmentation [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1704833&tag=1]<br />
<br />
* [Feb 27] Todorov 2012 - Parallels between sensory and motor information processing [http://scholar.google.com/scholar?hl=en&q=Parallels+between+sensory+and+motor+information+processing&btnG=&as_sdt=1%2C5&as_sdtp=]<br />
<br />
* [Feb 13] Sohl-Dickstein 2012 - The Natural Gradient by Analogy to Signal Whitening, and Recipes and Tricks for its Use [http://arxiv.org/pdf/1205.1828v1.pdf]<br />
<br />
* [Feb 06] Girosi 1998 - An Equivalence Between Sparse Approximation and Support Vector Machines [http://ucelinks.cdlib.org:8888/sfx_local?sid=google&auinit=F&aulast=Girosi&atitle=An+equivalence+between+sparse+approximation+and+support+vector+machines&id=doi:10.1162/089976698300017269&title=Neural+computation&volume=10&issue=6&date=1998&spage=1455&issn=0899-7667][http://18.7.29.232/bitstream/handle/1721.1/7289/AIM-1606.pdf?sequence=2]<br />
<br />
* [Jan 30] Zipser et al. 1996 - Contextual Modulation in Primary Visual Cortex [http://www.jneurosci.org/content/16/22/7376.full.pdf+html]<br />
Ayzenshtat et al. 2012 - Population Response to Natural Images in the Primary Visual Cortex Encodes Local Stimulus Attributes and Perceptual Processing [http://www.jneurosci.org/content/32/40/13971.full.pdf+html]<br />
<br />
* [Jan 23] Gillenwater et al. 2012 - Near-Optimal MAP Inference for Determinantal Point Processes [http://books.nips.cc/papers/files/nips25/NIPS2012_1264.pdf] [http://web.eecs.umich.edu/~kulesza/pubs/thesis.pdf]<br />
<br />
* [Jan 08] Carhart-Harris et al. 2012 - Neural correlates of the psychedelic state as determined by fMRI studies with psilocybin [http://www.pnas.org/content/109/6/2138.long]<br />
<br />
=== Fall 2012 ===<br />
<br />
* [Aug 22] Maass et al. - Liquid State Computing [http://www.igi.tugraz.at/maass/psfiles/130.pdf] [http://www.igi.tugraz.at/maass/psfiles/186.pdf]<br />
<br />
* [Aug 29] Salakhutdinov & Hinton 2012 - An Efficient Learning Procedure for Deep Boltzmann Machines [http://www.mitpressjournals.org/doi/pdfplus/10.1162/NECO_a_00311]<br />
<br />
* [Sep 05] Coates & Ng 2011 - An Analysis of Single-Layer Networks in Unsupervised Feature Learning [http://www.stanford.edu/~acoates/papers/coatesleeng_aistats_2011.pdf]<br />
<br />
* [Sep 12] Quiroga 2012 - Concept Cells: The Building Blocks of Declarative Memory Functions [http://www.phys.psu.edu/~collins/RNI/Quian_Quiroga_concept_cell_review_2012.pdf]<br />
<br />
* [Oct 10] Mora & Bialek 2011 - Are Biological Systems Poised at Criticality? [http://www.sns.ias.edu/pitp/2012files/Are_Biological_Systems.pdf]<br />
<br />
* [Oct 17] Newman 2005 - Power laws, Pareto distributions and Zipf's law. [http://www-personal.umich.edu/~mejn/courses/2006/cmplxsys899/powerlaws.pdf]<br />
<br />
* [Nov 28] Todorov 2004 - Optimality Principles in Sensorimotor Control. [http://www.nature.com/neuro/journal/v7/n9/pdf/nn1309.pdf]<br />
<br />
=== Spring 2011 ===<br />
<br />
* [Feb 10] Anastassiou et al. Ephaptic coupling of cortical neurons [http://www.nature.com/neuro/journal/v14/n2/full/nn.2727.html]<br />
* [Feb 10] Jin et al. Population receptive fields of ON and OFF thalamic inputs to an orientation column in visual cortex [http://www.nature.com/neuro/journal/v14/n2/full/nn.2729.html]<br />
<br />
* [Jan 20] A review of NIPS 2010 papers. [http://books.nips.cc/nips23.html]<br />
<br />
=== Fall 2010 ===<br />
<br />
* [Dec 9] Welling. Herding algorithms. [http://www.ics.uci.edu/%7Ewelling/publications/publications.html]<br />
<br />
* [Dec 2] Neal. MCMC using Hamiltonian dynamics. [http://www.cs.utoronto.ca/~radford/ftp/ham-mcmc.ps]<br />
<br />
* [Nov 18] Mairal et al. Task-driven dictionary learning. [http://arxiv.org/abs/1009.5358]<br />
<br />
* [Oct 21] Hamed. Self-referential dynamical systems for the self-organization of behavior in robotic systems. Ch 2-3 of [http://robot.informatik.uni-leipzig.de/research/publications/2007/ThesisHamed.pdf] <br />
<br />
* [Oct 14] Hammond, Vandergheynst, and Gribonval. Wavelets on graphs via spectral graph theory. [http://dx.doi.org/10.1016/j.acha.2010.04.005]<br />
<br />
* [Oct 7] Neal. Annealed importance sampling. [http://www.springerlink.com/content/x40w6w4r1142651k/]<br />
<br />
* [Sep 30] Martius, Herrmann. Taming the beast: Guided self-organization of behavior in autonomous robots. [http://www.springerlink.com/content/80013g12784261n3/]<br />
<br />
* [Sep 23] Bullier, Jean. "What is Fed Back?" in 23 Problems in Systems Neuroscience. [https://docs.google.com/fileview?id=0B30YK-ZtKoHXNGVlYjNlODUtNTYyOS00OTJmLWJjNWItNmUwNTljYmE3M2U4&hl=en&authkey=CMvu2LAM]<br />
<br />
=== [[Past TCN Papers]] ===<br />
<br />
=== Time and Location ===<br />
'''12:00-1:00pm''' every Thursday in the Redwood Center conference room (560 Evans). Please sign up to the email list (below) for announcements on meeting dates.<br />
<br />
=== Overview ===<br />
This journal club is aimed at graduate students in the neuroscience program and related life sciences, as well as students from engineering, physics, and math programs with an interest in computational approaches to studying the brain. It provides a broad survey of the theoretical and computational neuroscience literature, combining seminal works with recent theories. We meet for one session each week.<br />
<br />
It is possible to take this seminar for credit. If you would like to do so, please mention it during journal club.<br />
<br />
If you have questions, please email the club organizer, James Arnemann.<br />
<br />
=== E-mail List ===<br />
<br />
To subscribe to the journal club email list, visit the [https://calmail.berkeley.edu/manage/list/listinfo/redwood_tcn@lists.berkeley.edu signup page]. You will receive emails twice a week about the papers to be covered at the next meeting.<br />
<br />
=== Guidelines for Presenting Papers ===<br />
Each person who selects a paper should present it in about 15-30 minutes, covering:<br />
* an executive summary<br />
* an outline of the key points, ideas, or contributions<br />
* relevant background information<br />
* a description of the key figures<br />
* what you took away from the paper<br />
* some potential questions for discussion<br />
You are encouraged to use whatever method to present (slides, puppets, etc.).<br />
<br />
===[[Suggestion Board]]===</div>Giselyhttps://rctn.org/w/index.php?title=TCN&diff=8408TCN2015-12-08T01:37:56Z<p>Gisely: /* Fall 2015 */</p>
<hr />
<div>== Topics in Computational Neuroscience ==<br />
<br />
===Spring 2016===<br />
[Jan 28] Sean Mackesey<br />
<br />
[Jan 21] Dylan Paiton - R Goroshin, J Bruna, J Tompson, D Eigen, Y LeCun (2015). Unsupervised Learning of Spatiotemporally Coherent Metrics [http://arxiv.org/pdf/1412.6056.pdf] <br />
<br />
[Jan 14] Daniel Toker<br />
<br />
===Fall 2015===<br />
<br />
[Dec 17] Mojtaba Sahraee<br />
<br />
[Dec 10] Eric Weiss<br />
<br />
[Dec 03] Guy Isley - [http://papers.nips.cc/paper/5269-expectation-backpropagation-parameter-free-training-of-multilayer-neural-networks-with-continuous-or-discrete-weights.pdf Expectation Backpropagation]. D Soudry et al 2014.<br />
<br />
[Nov 26] Thanksgiving break<br />
<br />
[Nov 19] Eric Dodds - EC Smith, MS Lewicki (2006). Efficient Auditory Coding [http://www.nature.com/nature/journal/v439/n7079/full/nature04485.html]<br />
<br />
[Nov 12] Vasha Dutell - H Hosoya, A Hyvarinen (2015). A Hierarchical Statistical Model of Natural Images Explains Tuning Properties in V2 [http://www.jneurosci.org/content/35/29/10412.full]<br />
<br />
=== Summer 2014 ===<br />
<br />
* [June 19] Buzsaki & Mizuseki (2014). The log-dynamic brain: how skewed distributions affect network operations. [http://buzsakilab.com/content/PDFs/Mizuseki2014.pdf]<br />
<br />
* [June 12] Hukushima & Nemoto (1996). Exchange Monte Carlo method and application to spin glass simulations. [http://arxiv.org/pdf/cond-mat/9512035v1.pdf]<br />
<br />
* [June 5] Shi & Griffiths (2009). Neural implementation of hierarchical Bayesian inference by importance sampling. [http://cocosci.berkeley.edu/tom/papers/neuralIS.pdf]<br />
<br />
* [May 29] Petersen & Crochet (2013). Synaptic computation and sensory processing in neocortical layer 2/3. [http://www.sciencedirect.com/science/article/pii/S0896627313002675]<br />
<br />
* [May 22] Laje R, Buonomano DV (2013) Robust timing and motor patterns by taming chaos in recurrent neural networks. Nat. Neurosci. 16:925-933 [http://www.neurobio.ucla.edu/~dbuono/PDFs/LajeBuonomano_NatNeurosci_13.pdf]<br />
<br />
=== Spring 2014 ===<br />
<br />
* [Jan 20] Sutskever 2012 - Training Recurrent Neural Networks. [http://www.cs.utoronto.ca/~ilya/pubs/ilya_sutskever_phd_thesis.pdf]<br />
<br />
=== Fall 2013 ===<br />
<br />
* [Sep 18] Guillery & Sherman 2010 - Branched thalamic afferents: What are the messages that they relay to the cortex? [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3657838/#!po=22.7273]<br />
<br />
=== Summer 2013 ===<br />
<br />
* [July 10] Curto & Itskov 2008 - Cell Groups Reveal Structure of Stimulus Space [http://www.math.unl.edu/~ccurto2/papers/cell-groups.pdf]<br />
<br />
=== Spring 2013 ===<br />
<br />
* [Apr 8] Burak et al. 2009 - Accurate Path Integration in Continuous Attractor Network Models of Grid Cells [http://ucelinks.cdlib.org:8888/sfx_local?sid=google&auinit=Y&aulast=Burak&atitle=Accurate+path+integration+in+continuous+attractor+network+models+of+grid+cells&id=doi:10.1371/journal.pcbi.1000291&title=PLoS+Computational+Biology&volume=5&issue=2&date=2009&spage=e1000291&issn=1553-734X] [http://www.jneurosci.org/content/16/6/2112.full.pdf]<br />
<br />
* [Mar 27] Sreenivasan et al. 2011 - Grid cells generate an analog error-correcting code for singularly precise neural computation. [http://www.clm.utexas.edu/fietelab/Papers/nn.2901.pdf] <br />
<br />
* [Mar 20] Killian et al. - A map of visual space in the primate entorhinal cortex [http://ucelinks.cdlib.org:8888/sfx_local?sid=google&auinit=NJ&aulast=Killian&atitle=A+map+of+visual+space+in+the+primate+entorhinal+cortex&id=doi:10.1038/nature11587&title=Nature&volume=491&issue=7426&date=2012&spage=761&issn=0028-0836]<br />
<br />
* [Mar 13] Doyle et al. 2011 - Architecture, constraints and behavior [http://www.pnas.org/content/108/suppl.3/15624.full.pdf+html]<br />
<br />
* [Mar 6] Grady 2006 - Random Walks for Image Segmentation [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1704833&tag=1]<br />
<br />
* [Feb 27] Todorov 2012 - Parallels between sensory and motor information processing [http://scholar.google.com/scholar?hl=en&q=Parallels+between+sensory+and+motor+information+processing&btnG=&as_sdt=1%2C5&as_sdtp=]<br />
<br />
* [Feb 13] Sohl-Dickstein 2012 - The Natural Gradient by Analogy to Signal Whitening, and Recipes and Tricks for its Use [http://arxiv.org/pdf/1205.1828v1.pdf]<br />
<br />
* [Feb 06] Girosi 1998 - An Equivalence Between Sparse Approximation and Support Vector Machines [http://ucelinks.cdlib.org:8888/sfx_local?sid=google&auinit=F&aulast=Girosi&atitle=An+equivalence+between+sparse+approximation+and+support+vector+machines&id=doi:10.1162/089976698300017269&title=Neural+computation&volume=10&issue=6&date=1998&spage=1455&issn=0899-7667][http://18.7.29.232/bitstream/handle/1721.1/7289/AIM-1606.pdf?sequence=2]<br />
<br />
* [Jan 30] Zipser et al. 1996 - Contextual Modulation in Primary Visual Cortex [http://www.jneurosci.org/content/16/22/7376.full.pdf+html]<br />
Ayzenshtat et al. 2012 - Population Response to Natural Images in the Primary Visual Cortex Encodes Local Stimulus Attributes and Perceptual Processing [http://www.jneurosci.org/content/32/40/13971.full.pdf+html]<br />
<br />
* [Jan 23] Gillenwater et al. 2012 - Near-Optimal MAP Inference for Determinantal Point Processes [http://books.nips.cc/papers/files/nips25/NIPS2012_1264.pdf] [http://web.eecs.umich.edu/~kulesza/pubs/thesis.pdf]<br />
<br />
* [Jan 08] Carhart-Harris et al. 2012 - Neural correlates of the psychedelic state as determined by fMRI studies with psilocybin [http://www.pnas.org/content/109/6/2138.long]<br />
<br />
=== Fall 2012 ===<br />
<br />
* [Aug 22] Maass et al. - Liquid State Computing [http://www.igi.tugraz.at/maass/psfiles/130.pdf] [http://www.igi.tugraz.at/maass/psfiles/186.pdf]<br />
<br />
* [Aug 29] Salakhutdinov & Hinton 2012 - An Efficient Learning Procedure for Deep Boltzmann Machines [http://www.mitpressjournals.org/doi/pdfplus/10.1162/NECO_a_00311]<br />
<br />
* [Sep 05] Coates & Ng 2011 - An Analysis of Single-Layer Networks in Unsupervised Feature Learning [http://www.stanford.edu/~acoates/papers/coatesleeng_aistats_2011.pdf]<br />
<br />
* [Sep 12] Quiroga 2012 - Concept Cells: The Building Blocks of Declarative Memory Functions [http://www.phys.psu.edu/~collins/RNI/Quian_Quiroga_concept_cell_review_2012.pdf]<br />
<br />
* [Oct 10] Mora & Bialek 2011 - Are Biological Systems Poised at Criticality? [http://www.sns.ias.edu/pitp/2012files/Are_Biological_Systems.pdf]<br />
<br />
* [Oct 17] Newman 2005 - Power laws, Pareto distributions and Zipf's law. [http://www-personal.umich.edu/~mejn/courses/2006/cmplxsys899/powerlaws.pdf]<br />
<br />
* [Nov 28] Todorov 2004 - Optimality Principles in Sensorimotor Control. [http://www.nature.com/neuro/journal/v7/n9/pdf/nn1309.pdf]<br />
<br />
=== Spring 2011 ===<br />
<br />
* [Feb 10] Anastassiou et al. Ephaptic coupling of cortical neurons [http://www.nature.com/neuro/journal/v14/n2/full/nn.2727.html]<br />
* [Feb 10] Jin et al. Population receptive fields of ON and OFF thalamic inputs to an orientation column in visual cortex [http://www.nature.com/neuro/journal/v14/n2/full/nn.2729.html]<br />
<br />
* [Jan 20] A review of NIPS 2010 papers. [http://books.nips.cc/nips23.html]<br />
<br />
=== Fall 2010 ===<br />
<br />
* [Dec 9] Welling. Herding algorithms. [http://www.ics.uci.edu/%7Ewelling/publications/publications.html]<br />
<br />
* [Dec 2] Neal. MCMC using Hamiltonian dynamics. [http://www.cs.utoronto.ca/~radford/ftp/ham-mcmc.ps]<br />
<br />
* [Nov 18] Mairal et al. Task-driven dictionary learning. [http://arxiv.org/abs/1009.5358]<br />
<br />
* [Oct 21] Hamed. Self-referential dynamical systems for the self-organization of behavior in robotic systems. Ch 2-3 of [http://robot.informatik.uni-leipzig.de/research/publications/2007/ThesisHamed.pdf] <br />
<br />
* [Oct 14] Hammond, Vandergheynst, and Gribonval. Wavelets on graphs via spectral graph theory. [http://dx.doi.org/10.1016/j.acha.2010.04.005]<br />
<br />
* [Oct 7] Neal. Annealed importance sampling. [http://www.springerlink.com/content/x40w6w4r1142651k/]<br />
<br />
* [Sep 30] Martius, Herrmann. Taming the beast: Guided self-organization of behavior in autonomous robots. [http://www.springerlink.com/content/80013g12784261n3/]<br />
<br />
* [Sep 23] Bullier, Jean. "What is Fed Back?" in 23 Problems in Systems Neuroscience. [https://docs.google.com/fileview?id=0B30YK-ZtKoHXNGVlYjNlODUtNTYyOS00OTJmLWJjNWItNmUwNTljYmE3M2U4&hl=en&authkey=CMvu2LAM]<br />
<br />
=== [[Past TCN Papers]] ===<br />
<br />
=== Time and Location ===<br />
'''12:00-1:00pm''' every Thursday in the Redwood Center conference room (560 Evans). Please sign up for the email list (below) to receive announcements of meeting dates.<br />
<br />
=== Overview ===<br />
This journal club is aimed at graduate students in the neuroscience program and related life sciences, as well as students from engineering, physics, and math programs with an interest in computational approaches to studying the brain. It provides a broad survey of the theoretical and computational neuroscience literature, combining seminal works with recent theories. We meet for one session each week.<br />
<br />
It is possible to take this seminar for credit. If you would like to do so, please mention it during journal club.<br />
<br />
If you have questions, please email the club organizer, James Arnemann.<br />
<br />
=== E-mail List ===<br />
<br />
To subscribe to the journal club email list, visit the [https://calmail.berkeley.edu/manage/list/listinfo/redwood_tcn@lists.berkeley.edu signup page]. You will receive emails twice a week about the papers to be covered at the next meeting.<br />
<br />
=== Guidelines for Presenting Papers ===<br />
Each person who selects a paper should present it in about 15-30 minutes, covering:<br />
* an executive summary<br />
* an outline of the key points, ideas, or contributions<br />
* relevant background information<br />
* a description of the key figures<br />
* what you took away from the paper<br />
* some potential questions for discussion<br />
You are encouraged to use whatever method to present (slides, puppets, etc.).<br />
<br />
===[[Suggestion Board]]===</div>Giselyhttps://rctn.org/w/index.php?title=TCN&diff=8407TCN2015-12-08T01:36:33Z<p>Gisely: /* Fall 2015 */</p>
<hr />
<div>== Topics in Computational Neuroscience ==<br />
<br />
===Spring 2016===<br />
[Jan 28] Sean Mackesey<br />
<br />
[Jan 21] Dylan Paiton - R Goroshin, J Bruna, J Tompson, D Eigen, Y LeCun (2015). Unsupervised Learning of Spatiotemporally Coherent Metrics [http://arxiv.org/pdf/1412.6056.pdf] <br />
<br />
[Jan 14] Daniel Toker<br />
<br />
===Fall 2015===<br />
<br />
[Dec 17] Mojtaba Sahraee<br />
<br />
[Dec 10] Eric Weiss<br />
<br />
[Dec 03] Guy Isley - [http://papers.nips.cc/paper/5269-expectation-backpropagation-parameter-free-training-of-multilayer-neural-networks-with-continuous-or-discrete-weights.pdf Expectation Backpropagation]. D Soudry et al 2014.<br />
<br />
[Nov 26] Thanksgiving break<br />
<br />
[Nov 19] Eric Dodds - EC Smith, MS Lewicki (2006). Efficient Auditory Coding [http://www.nature.com/nature/journal/v439/n7079/full/nature04485.html]<br />
<br />
[Nov 12] Vasha Dutell - H Hosoya, A Hyvarinen (2015). A Hierarchical Statistical Model of Natural Images Explains Tuning Properties in V2 [http://www.jneurosci.org/content/35/29/10412.full]<br />
<br />
=== Summer 2014 ===<br />
<br />
* [June 19] Buzsaki & Mizuseki (2014). The log-dynamic brain: how skewed distributions affect network operations. [http://buzsakilab.com/content/PDFs/Mizuseki2014.pdf]<br />
<br />
* [June 12] Hukushima & Nemoto (1996). Exchange Monte Carlo method and application to spin glass simulations. [http://arxiv.org/pdf/cond-mat/9512035v1.pdf]<br />
<br />
* [June 5] Shi & Griffiths (2009). Neural implementation of hierarchical Bayesian inference by importance sampling. [http://cocosci.berkeley.edu/tom/papers/neuralIS.pdf]<br />
<br />
* [May 29] Petersen & Crochet (2013). Synaptic computation and sensory processing in neocortical layer 2/3. [http://www.sciencedirect.com/science/article/pii/S0896627313002675]<br />
<br />
* [May 22] Laje R, Buonomano DV (2013) Robust timing and motor patterns by taming chaos in recurrent neural networks. Nat. Neurosci. 16:925-933 [http://www.neurobio.ucla.edu/~dbuono/PDFs/LajeBuonomano_NatNeurosci_13.pdf]<br />
<br />
=== Spring 2014 ===<br />
<br />
* [Jan 20] Sutskever 2012 - Training Recurrent Neural Networks. [http://www.cs.utoronto.ca/~ilya/pubs/ilya_sutskever_phd_thesis.pdf]<br />
<br />
=== Fall 2013 ===<br />
<br />
* [Sep 18] Guillery & Sherman 2010 - Branched thalamic afferents: What are the messages that they relay to the cortex? [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3657838/#!po=22.7273]<br />
<br />
=== Summer 2013 ===<br />
<br />
* [July 10] Curto & Itskov 2008 - Cell Groups Reveal Structure of Stimulus Space [http://www.math.unl.edu/~ccurto2/papers/cell-groups.pdf]<br />
<br />
=== Spring 2013 ===<br />
<br />
* [Apr 8] Burak et al. 2009 - Accurate Path Integration in Continuous Attractor Network Models of Grid Cells [http://ucelinks.cdlib.org:8888/sfx_local?sid=google&auinit=Y&aulast=Burak&atitle=Accurate+path+integration+in+continuous+attractor+network+models+of+grid+cells&id=doi:10.1371/journal.pcbi.1000291&title=PLoS+Computational+Biology&volume=5&issue=2&date=2009&spage=e1000291&issn=1553-734X] [http://www.jneurosci.org/content/16/6/2112.full.pdf]<br />
<br />
* [Mar 27] Sreenivasan et al. 2011 - Grid cells generate an analog error-correcting code for singularly precise neural computation. [http://www.clm.utexas.edu/fietelab/Papers/nn.2901.pdf] <br />
<br />
* [Mar 20] Killian et al. - A map of visual space in the primate entorhinal cortex [http://ucelinks.cdlib.org:8888/sfx_local?sid=google&auinit=NJ&aulast=Killian&atitle=A+map+of+visual+space+in+the+primate+entorhinal+cortex&id=doi:10.1038/nature11587&title=Nature&volume=491&issue=7426&date=2012&spage=761&issn=0028-0836]<br />
<br />
* [Mar 13] Doyle et al. 2011 - Architecture, constraints and behavior [http://www.pnas.org/content/108/suppl.3/15624.full.pdf+html]<br />
<br />
* [Mar 6] Grady 2006 - Random Walks for Image Segmentation [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1704833&tag=1]<br />
<br />
* [Feb 27] Todorov 2012 - Parallels between sensory and motor information processing [http://scholar.google.com/scholar?hl=en&q=Parallels+between+sensory+and+motor+information+processing&btnG=&as_sdt=1%2C5&as_sdtp=]<br />
<br />
* [Feb 13] Sohl-Dickstein 2012 - The Natural Gradient by Analogy to Signal Whitening, and Recipes and Tricks for its Use [http://arxiv.org/pdf/1205.1828v1.pdf]<br />
<br />
* [Feb 06] Girosi 1998 - An Equivalence Between Sparse Approximation and Support Vector Machines [http://ucelinks.cdlib.org:8888/sfx_local?sid=google&auinit=F&aulast=Girosi&atitle=An+equivalence+between+sparse+approximation+and+support+vector+machines&id=doi:10.1162/089976698300017269&title=Neural+computation&volume=10&issue=6&date=1998&spage=1455&issn=0899-7667][http://18.7.29.232/bitstream/handle/1721.1/7289/AIM-1606.pdf?sequence=2]<br />
<br />
* [Jan 30] Zipser et al. 1996 - Contextual Modulation in Primary Visual Cortex [http://www.jneurosci.org/content/16/22/7376.full.pdf+html]<br />
Ayzenshtat et al. 2012 - Population Response to Natural Images in the Primary Visual Cortex Encodes Local Stimulus Attributes and Perceptual Processing [http://www.jneurosci.org/content/32/40/13971.full.pdf+html]<br />
<br />
* [Jan 23] Gillenwater et al. 2012 - Near-Optimal MAP Inference for Determinantal Point Processes [http://books.nips.cc/papers/files/nips25/NIPS2012_1264.pdf] [http://web.eecs.umich.edu/~kulesza/pubs/thesis.pdf]<br />
<br />
* [Jan 08] Carhart-Harris et al. 2012 - Neural correlates of the psychedelic state as determined by fMRI studies with psilocybin [http://www.pnas.org/content/109/6/2138.long]<br />
<br />
=== Fall 2012 ===<br />
<br />
* [Aug 22] Maass et al. - Liquid State Computing [http://www.igi.tugraz.at/maass/psfiles/130.pdf] [http://www.igi.tugraz.at/maass/psfiles/186.pdf]<br />
<br />
* [Aug 29] Salakhutdinov & Hinton 2012 - An Efficient Learning Procedure for Deep Boltzmann Machines [http://www.mitpressjournals.org/doi/pdfplus/10.1162/NECO_a_00311]<br />
<br />
* [Sep 05] Coates & Ng 2011 - An Analysis of Single-Layer Networks in Unsupervised Feature Learning [http://www.stanford.edu/~acoates/papers/coatesleeng_aistats_2011.pdf]<br />
<br />
* [Sep 12] Quiroga 2012 - Concept Cells: The Building Blocks of Declarative Memory Functions [http://www.phys.psu.edu/~collins/RNI/Quian_Quiroga_concept_cell_review_2012.pdf]<br />
<br />
* [Oct 10] Mora & Bialek 2011 - Are Biological Systems Poised at Criticality? [http://www.sns.ias.edu/pitp/2012files/Are_Biological_Systems.pdf]<br />
<br />
* [Oct 17] Newman 2005 - Power laws, Pareto distributions and Zipf's law. [http://www-personal.umich.edu/~mejn/courses/2006/cmplxsys899/powerlaws.pdf]<br />
<br />
* [Nov 28] Todorov 2004 - Optimality Principles in Sensorimotor Control. [http://www.nature.com/neuro/journal/v7/n9/pdf/nn1309.pdf]<br />
<br />
=== Spring 2011 ===<br />
<br />
* [Feb 10] Anastassiou et al. Ephaptic coupling of cortical neurons [http://www.nature.com/neuro/journal/v14/n2/full/nn.2727.html]<br />
* [Feb 10] Jin et al. Population receptive fields of ON and OFF thalamic inputs to an orientation column in visual cortex [http://www.nature.com/neuro/journal/v14/n2/full/nn.2729.html]<br />
<br />
* [Jan 20] A review of NIPS 2010 papers. [http://books.nips.cc/nips23.html]<br />
<br />
=== Fall 2010 ===<br />
<br />
* [Dec 9] Welling. Herding algorithms. [http://www.ics.uci.edu/%7Ewelling/publications/publications.html]<br />
<br />
* [Dec 2] Neal. MCMC using Hamiltonian dynamics. [http://www.cs.utoronto.ca/~radford/ftp/ham-mcmc.ps]<br />
<br />
* [Nov 18] Mairal et al. Task-driven dictionary learning. [http://arxiv.org/abs/1009.5358]<br />
<br />
* [Oct 21] Hamed. Self-referential dynamical systems for the self-organization of behavior in robotic systems. Ch 2-3 of [http://robot.informatik.uni-leipzig.de/research/publications/2007/ThesisHamed.pdf] <br />
<br />
* [Oct 14] Hammond, Vandergheynst, and Gribonval. Wavelets on graphs via spectral graph theory. [http://dx.doi.org/10.1016/j.acha.2010.04.005]<br />
<br />
* [Oct 7] Neal. Annealed importance sampling. [http://www.springerlink.com/content/x40w6w4r1142651k/]<br />
<br />
* [Sep 30] Martius, Herrmann. Taming the beast: Guided self-organization of behavior in autonomous robots. [http://www.springerlink.com/content/80013g12784261n3/]<br />
<br />
* [Sep 23] Bullier, Jean. "What is Fed Back?" in 23 Problems in Systems Neuroscience. [https://docs.google.com/fileview?id=0B30YK-ZtKoHXNGVlYjNlODUtNTYyOS00OTJmLWJjNWItNmUwNTljYmE3M2U4&hl=en&authkey=CMvu2LAM]<br />
<br />
=== [[Past TCN Papers]] ===<br />
<br />
=== Time and Location ===<br />
'''12:00-1:00pm''' every Thursday in the Redwood Center conference room (560 Evans). Please sign up for the email list (below) to receive announcements of meeting dates.<br />
<br />
=== Overview ===<br />
This journal club is aimed at graduate students in the neuroscience program and related life sciences, as well as students from engineering, physics, and math programs with an interest in computational approaches to studying the brain. It provides a broad survey of the theoretical and computational neuroscience literature, combining seminal works with recent theories. We meet for one session each week.<br />
<br />
It is possible to take this seminar for credit. If you would like to do so, please mention it during journal club.<br />
<br />
If you have questions, please email the club organizer, James Arnemann.<br />
<br />
=== E-mail List ===<br />
<br />
To subscribe to the journal club email list, visit the [https://calmail.berkeley.edu/manage/list/listinfo/redwood_tcn@lists.berkeley.edu signup page]. You will receive emails twice a week about the papers to be covered at the next meeting.<br />
<br />
=== Guidelines for Presenting Papers ===<br />
Each person who selects a paper should present it in about 15-30 minutes, covering:<br />
* an executive summary<br />
* an outline of the key points, ideas, or contributions<br />
* relevant background information<br />
* a description of the key figures<br />
* what you took away from the paper<br />
* some potential questions for discussion<br />
You are encouraged to use whatever method to present (slides, puppets, etc.).<br />
<br />
===[[Suggestion Board]]===</div>Giselyhttps://rctn.org/w/index.php?title=TCN&diff=8406TCN2015-12-08T01:36:06Z<p>Gisely: /* Fall 2015 */</p>
<hr />
<div>== Topics in Computational Neuroscience ==<br />
<br />
===Spring 2016===<br />
[Jan 28] Sean Mackesey<br />
<br />
[Jan 21] Dylan Paiton - R Goroshin, J Bruna, J Tompson, D Eigen, Y LeCun (2015). Unsupervised Learning of Spatiotemporally Coherent Metrics [http://arxiv.org/pdf/1412.6056.pdf] <br />
<br />
[Jan 14] Daniel Toker<br />
<br />
===Fall 2015===<br />
<br />
[Dec 17] Mojtaba Sahraee<br />
<br />
[Dec 10] Eric Weiss<br />
<br />
[Dec 03] Guy Isley - [http://papers.nips.cc/paper/5269-expectation-backpropagation-parameter-free-training-of-multilayer-neural-networks-with-continuous-or-discrete-weights.pdf Expectation Backpropagation]. D Soudry et al 2014.<br />
<br />
[Nov 26] Thanksgiving break<br />
<br />
[Nov 19] Eric Dodds - EC Smith, MS Lewicki (2006). Efficient Auditory Coding [http://www.nature.com/nature/journal/v439/n7079/full/nature04485.html]<br />
<br />
[Nov 12] Vasha Dutell - H Hosoya, A Hyvarinen (2015). A Hierarchical Statistical Model of Natural Images Explains Tuning Properties in V2 [http://www.jneurosci.org/content/35/29/10412.full]<br />
<br />
=== Summer 2014 ===<br />
<br />
* [June 19] Buzsaki & Mizuseki (2014). The log-dynamic brain: how skewed distributions affect network operations. [http://buzsakilab.com/content/PDFs/Mizuseki2014.pdf]<br />
<br />
* [June 12] Hukushima & Nemoto (1996). Exchange Monte Carlo method and application to spin glass simulations. [http://arxiv.org/pdf/cond-mat/9512035v1.pdf]<br />
<br />
* [June 5] Shi & Griffiths (2009). Neural implementation of hierarchical Bayesian inference by importance sampling. [http://cocosci.berkeley.edu/tom/papers/neuralIS.pdf]<br />
<br />
* [May 29] Petersen & Crochet (2013). Synaptic computation and sensory processing in neocortical layer 2/3. [http://www.sciencedirect.com/science/article/pii/S0896627313002675]<br />
<br />
* [May 22] Laje R, Buonomano DV (2013) Robust timing and motor patterns by taming chaos in recurrent neural networks. Nat. Neurosci. 16:925-933 [http://www.neurobio.ucla.edu/~dbuono/PDFs/LajeBuonomano_NatNeurosci_13.pdf]<br />
<br />
=== Spring 2014 ===<br />
<br />
* [Jan 20] Sutskever 2012 - Training Recurrent Neural Networks. [http://www.cs.utoronto.ca/~ilya/pubs/ilya_sutskever_phd_thesis.pdf]<br />
<br />
=== Fall 2013 ===<br />
<br />
* [Sep 18] Guillery & Sherman 2010 - Branched thalamic afferents: What are the messages that they relay to the cortex? [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3657838/#!po=22.7273]<br />
<br />
=== Summer 2013 ===<br />
<br />
* [July 10] Curto & Itskov 2008 - Cell Groups Reveal Structure of Stimulus Space [http://www.math.unl.edu/~ccurto2/papers/cell-groups.pdf]<br />
<br />
=== Spring 2013 ===<br />
<br />
* [Apr 8] Burak et al. 2009 - Accurate Path Integration in Continuous Attractor Network Models of Grid Cells [http://ucelinks.cdlib.org:8888/sfx_local?sid=google&auinit=Y&aulast=Burak&atitle=Accurate+path+integration+in+continuous+attractor+network+models+of+grid+cells&id=doi:10.1371/journal.pcbi.1000291&title=PLoS+Computational+Biology&volume=5&issue=2&date=2009&spage=e1000291&issn=1553-734X] [http://www.jneurosci.org/content/16/6/2112.full.pdf]<br />
<br />
* [Mar 27] Sreenivasan et al. 2011 - Grid cells generate an analog error-correcting code for singularly precise neural computation. [http://www.clm.utexas.edu/fietelab/Papers/nn.2901.pdf] <br />
<br />
* [Mar 20] Killian et al. - A map of visual space in the primate entorhinal cortex [http://ucelinks.cdlib.org:8888/sfx_local?sid=google&auinit=NJ&aulast=Killian&atitle=A+map+of+visual+space+in+the+primate+entorhinal+cortex&id=doi:10.1038/nature11587&title=Nature&volume=491&issue=7426&date=2012&spage=761&issn=0028-0836]<br />
<br />
* [Mar 13] Doyle et al. 2011 - Architecture, constraints and behavior [http://www.pnas.org/content/108/suppl.3/15624.full.pdf+html]<br />
<br />
* [Mar 6] Grady 2006 - Random Walks for Image Segmentation [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1704833&tag=1]<br />
<br />
* [Feb 27] Todorov 2012 - Parallels between sensory and motor information processing [http://scholar.google.com/scholar?hl=en&q=Parallels+between+sensory+and+motor+information+processing&btnG=&as_sdt=1%2C5&as_sdtp=]<br />
<br />
* [Feb 13] Sohl-Dickstein 2012 - The Natural Gradient by Analogy to Signal Whitening, and Recipes and Tricks for its Use [http://arxiv.org/pdf/1205.1828v1.pdf]<br />
<br />
* [Feb 06] Girosi 1998 - An Equivalence Between Sparse Approximation and Support Vector Machines [http://ucelinks.cdlib.org:8888/sfx_local?sid=google&auinit=F&aulast=Girosi&atitle=An+equivalence+between+sparse+approximation+and+support+vector+machines&id=doi:10.1162/089976698300017269&title=Neural+computation&volume=10&issue=6&date=1998&spage=1455&issn=0899-7667][http://18.7.29.232/bitstream/handle/1721.1/7289/AIM-1606.pdf?sequence=2]<br />
<br />
* [Jan 30] Zipser et al. 1996 - Contextual Modulation in Primary Visual Cortex [http://www.jneurosci.org/content/16/22/7376.full.pdf+html]<br />
Ayzenshtat et al. 2012 - Population Response to Natural Images in the Primary Visual Cortex Encodes Local Stimulus Attributes and Perceptual Processing [http://www.jneurosci.org/content/32/40/13971.full.pdf+html]<br />
<br />
* [Jan 23] Gillenwater et al. 2012 - Near-Optimal MAP Inference for Determinantal Point Processes [http://books.nips.cc/papers/files/nips25/NIPS2012_1264.pdf] [http://web.eecs.umich.edu/~kulesza/pubs/thesis.pdf]<br />
<br />
* [Jan 08] Carhart-Harris et al. 2012 - Neural correlates of the psychedelic state as determined by fMRI studies with psilocybin [http://www.pnas.org/content/109/6/2138.long]<br />
<br />
=== Fall 2012 ===<br />
<br />
* [Aug 22] Maass et al. - Liquid State Computing [http://www.igi.tugraz.at/maass/psfiles/130.pdf] [http://www.igi.tugraz.at/maass/psfiles/186.pdf]<br />
<br />
* [Aug 29] Salakhutdinov & Hinton 2012 - An Efficient Learning Procedure for Deep Boltzmann Machines [http://www.mitpressjournals.org/doi/pdfplus/10.1162/NECO_a_00311]<br />
<br />
* [Sep 05] Coates & Ng 2011 - An Analysis of Single-Layer Networks in Unsupervised Feature Learning [http://www.stanford.edu/~acoates/papers/coatesleeng_aistats_2011.pdf]<br />
<br />
* [Sep 12] Quiroga 2012 - Concept Cells: The Building Blocks of Declarative Memory Functions [http://www.phys.psu.edu/~collins/RNI/Quian_Quiroga_concept_cell_review_2012.pdf]<br />
<br />
* [Oct 10] Mora & Bialek 2011 - Are Biological Systems Poised at Criticality? [http://www.sns.ias.edu/pitp/2012files/Are_Biological_Systems.pdf]<br />
<br />
* [Oct 17] Newman 2005 - Power laws, Pareto distributions and Zipf's law. [http://www-personal.umich.edu/~mejn/courses/2006/cmplxsys899/powerlaws.pdf]<br />
<br />
* [Nov 28] Todorov 2004 - Optimality Principles in Sensorimotor Control. [http://www.nature.com/neuro/journal/v7/n9/pdf/nn1309.pdf]<br />
<br />
=== Spring 2011 ===<br />
<br />
* [Feb 10] Anastassiou et al. Ephaptic coupling of cortical neurons [http://www.nature.com/neuro/journal/v14/n2/full/nn.2727.html]<br />
* [Feb 10] Jin et al. Population receptive fields of ON and OFF thalamic inputs to an orientation column in visual cortex [http://www.nature.com/neuro/journal/v14/n2/full/nn.2729.html]<br />
<br />
* [Jan 20] A review of NIPS 2010 papers. [http://books.nips.cc/nips23.html]<br />
<br />
=== Fall 2010 ===<br />
<br />
* [Dec 9] Welling. Herding algorithms. [http://www.ics.uci.edu/%7Ewelling/publications/publications.html]<br />
<br />
* [Dec 2] Neal. MCMC using Hamiltonian dynamics. [http://www.cs.utoronto.ca/~radford/ftp/ham-mcmc.ps]<br />
<br />
* [Nov 18] Mairal et al. Task-driven dictionary learning. [http://arxiv.org/abs/1009.5358]<br />
<br />
* [Oct 21] Hamed. Self-referential dynamical systems for the self-organization of behavior in robotic systems. Ch 2-3 of [http://robot.informatik.uni-leipzig.de/research/publications/2007/ThesisHamed.pdf] <br />
<br />
* [Oct 14] Hammond, Vandergheynst, and Gribonval. Wavelets on graphs via spectral graph theory. [http://dx.doi.org/10.1016/j.acha.2010.04.005]<br />
<br />
* [Oct 7] Neal. Annealed importance sampling. [http://www.springerlink.com/content/x40w6w4r1142651k/]<br />
<br />
* [Sep 30] Martius, Herrmann. Taming the beast: Guided self-organization of behavior in autonomous robots. [http://www.springerlink.com/content/80013g12784261n3/]<br />
<br />
* [Sep 23] Bullier, Jean. "What is Fed Back?" in 23 Problems in Systems Neuroscience. [https://docs.google.com/fileview?id=0B30YK-ZtKoHXNGVlYjNlODUtNTYyOS00OTJmLWJjNWItNmUwNTljYmE3M2U4&hl=en&authkey=CMvu2LAM]<br />
<br />
=== [[Past TCN Papers]] ===<br />
<br />
=== Time and Location ===<br />
'''12:00-1:00pm''' every Thursday in the Redwood Center conference room (560 Evans). Please sign up for the email list (below) to receive announcements of meeting dates.<br />
<br />
=== Overview ===<br />
This journal club is aimed at graduate students in the neuroscience program and related life sciences, as well as students from engineering, physics, and math programs with an interest in computational approaches to studying the brain. It provides a broad survey of the theoretical and computational neuroscience literature, combining seminal works with recent theories. We meet for one session each week.<br />
<br />
It is possible to take this seminar for credit. If you would like to do so, please mention it during journal club.<br />
<br />
If you have questions, please email the club organizer, James Arnemann.<br />
<br />
=== E-mail List ===<br />
<br />
To subscribe to the journal club email list, visit the [https://calmail.berkeley.edu/manage/list/listinfo/redwood_tcn@lists.berkeley.edu signup page]. You will receive emails twice a week about the papers to be covered at the next meeting.<br />
<br />
=== Guidelines for Presenting Papers ===<br />
Each person who selects a paper should present, in about 15-30 minutes:<br />
* an executive summary<br />
* an outline of the key points, ideas, or contributions<br />
* relevant background information<br />
* a description of the key figures<br />
* what you took away from the paper<br />
* some potential questions for discussion<br />
You are encouraged to use whatever method works to present (slides, puppets, etc.).<br />
<br />
===[[Suggestion Board]]===</div>Giselyhttps://rctn.org/w/index.php?title=Cluster&diff=8335Cluster2015-09-12T05:33:19Z<p>Gisely: /* General Information */</p>
<hr />
<div>= General Information =<br />
<br />
The Redwood computing cluster consists of about a dozen somewhat heterogeneous machines, some with graphics cards (GPUs). Typical use cases: you have independent jobs that can run in parallel, so several machines complete the task faster even though no single machine is faster than your own laptop; you have a long-running job that may take a day, and you don't want to have to leave your laptop on (and unusable) the whole time; your code leverages a communication scheme (such as MPI) to have multiple machines cooperatively work on a problem; or you want to run long GPU computations.<br />
<br />
In order for the cluster to be useful and well-utilized, it works best for everyone to submit jobs to the queue (see '''SLURM''' further down on this page for the details). A job may not start right away, but will run once its turn comes. Please do not run extended interactive sessions or ssh directly to worker nodes to perform computation.<br />
<br />
== Hardware Overview == <br />
<br />
The current hardware and node configuration is listed [https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/ucb-supercluster/cortex here].<br />
<br />
In addition to the compute nodes, we own a file server (a 4 TB NetApp appliance)<br />
which is mounted as scratch space.<br />
<br />
In brief, we have 14 nodes with over 60 cores and 4 GPUs.<br />
<br />
== Getting an account and one-time password service == <br />
In order to get an account on the cluster, please send an email to Bruno (baolshausen AT berk...edu) with the following information:<br />
<br />
Full Name <emailaddress> desiredusername<br />
<br />
Please also include a note about which PI you are working with. Note: usernames must be 3-8 characters long, so '''desiredusername''' would be truncated to '''desiredu''' in this case.<br />
<br />
'''OTP (One Time Password) Service'''<br />
<br />
Once you have a username, you will need to follow the instructions found [https://commons.lbl.gov/display/itfaq/OTP+%28One+Time+Password%29+Service here] to set up the Pledge application, which gives you a one-time password for logging into the cluster (see '''Installing and Configuring the OTP Token''').<br />
<br />
== Directory setup ==<br />
<br />
=== Home Directory Quota ===<br />
<br />
There is a 10GB quota limit enforced on $HOME directory (/global/home/users/username) usage. Please keep your usage below this limit. There will be NetApp snapshots in place in this file system, so we suggest you store only your source code and scripts in this area and store all your data under /clusterfs/cortex (see below).<br />
<br />
In order to see your current quota and usage, use the following command:<br />
<br />
quota -s<br />
<br />
=== Data ===<br />
<br />
For large amounts of data, please create a directory<br />
<br />
/clusterfs/cortex/scratch/username<br />
<br />
and store the data inside that directory. Note that unlike the home directory, scratch space is not backed up and the permanence of your data is not guaranteed. There is a total limit of 4 TB on this drive, which is shared by everyone at the Redwood Center.<br />
<br />
== Connect ==<br />
<br />
==== Pledge App (get a password) ====<br />
<br />
* Run the pledge app and click "Generate one-time password"<br />
* Enter your PIN and press "Enter"<br />
* The application will present your 7 digit one time password<br />
<br />
=== ssh to a login node ===<br />
<br />
ssh -Y username@hpc.brc.berkeley.edu<br />
<br />
and use your one-time password.<br />
<br />
If you intend to work with a remote GUI session, you can add the -C flag to the command above to compress data sent through the ssh tunnel.<br />
<br />
''' note: please don't use the login nodes for computations (e.g. matlab, python)! '''<br />
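If you connect often, the flags above can be saved in an ssh client config entry so a short alias suffices. A minimal sketch (the alias name ''brc'' is arbitrary, and ''username'' stands for your cluster username):<br />

```
# ~/.ssh/config -- convenience entry for the cluster login node
Host brc
    HostName hpc.brc.berkeley.edu
    User username
    ForwardX11 yes
    ForwardX11Trusted yes    # together with the line above, same effect as ssh -Y
    Compression yes          # same effect as ssh -C
```

After this, ''ssh brc'' connects; you still type your one-time password.<br />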
<br />
=== Setup environment ===<br />
<br />
* put all your customizations into your .bashrc <br />
* for login shells, .bash_profile is used, which in turn loads .bashrc<br />
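A minimal sketch of what such customizations might look like (the module name/version is only an example; adjust it to the tools you actually use):<br />

```shell
# ~/.bashrc -- sourced for interactive shells (and via .bash_profile at login)
export NODES='*cortex'      # restrict the ww* commands to the cortex cluster (see below)
module load matlab/R2013a   # load software you use every session
```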
<br />
=== Using a Windows machine ===<br />
Windows is not a Unix-based operating system and as a result does not natively interface with a Unix environment. Download the following two pieces of software as a workaround:<br />
* Install a Unix environment emulator to interface directly with the cluster. Cygwin [http://www.cygwin.com] seems to work well. During installation make sure to install Net -> "openssh"; Editors -> "vim" is also recommended. Then you can use the instructions detailed in ''ssh to a login node'' above.<br />
* Install an SFTP/SCP/FTP client to allow file sharing between the cluster and your local machine. WinSCP [http://www.winscp.net] is recommended. ExpanDrive can also be used to create a cluster-based network drive on your local machine.<br />
<br />
== Useful commands ==<br />
<br />
See https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/scheduler/ucb-supercluster-slurm-migration for a detailed FAQ on the SLURM job manager. <br />
<br />
Full description of our system by the LBL folks is at http://go.lbl.gov/hpcs-user-svcs/ucb-supercluster/cortex<br />
<br />
=== SLURM ===<br />
<br />
SLURM is our scheduler, and understanding it well is essential to having a good time doing research on the cluster. SLURM acts as the cluster's administrator: it finds resources for your job, and it helps others do the same, so we are not stepping on each other's toes. There are some do's and don'ts when using SLURM.<br />
<br />
* Logging in -- when you log in to the cluster, you land on the login node. We do not own the login node and share it with other members of the Berkeley Research Computing (BRC) consortium, so it is important not to run anything here *at all*.<br />
<br />
* Information on submitting, monitoring, and reviewing jobs can be found here. Many simple bash tricks let you submit a large number of embarrassingly parallel jobs on the cluster, which is great for parameter sweeps.<br />
<br />
* Storage -- every user gets a 10 GB quota gratis from the BRC. This is your home folder, where you land when you log in. In addition to this there's a 20TB scratch space (/clusterfs/cortex/scratch) shared by all members of the Redwood Center. We keep a log of how much space each member writing into the scratch folder is using at (TODO)<br />
<br />
* We have 4 GPU nodes; information on requesting and using them can be found here. When you request a GPU as a resource, you get the whole node along with it.<br />
<br />
* We have a debug queue that can be requested for research here<br />
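As a sketch, a GPU job script might look like the following (the program name is hypothetical, and the exact resource syntax -- e.g. whether a --gres line is needed -- depends on how the administrators configured the GPU nodes, so check the documentation referenced above):<br />

```shell
#!/bin/bash -l
#SBATCH -p cortex          # our partition
#SBATCH --time=08:00:00
#SBATCH --gres=gpu:1       # request a GPU; you get the whole node along with it
module load cuda
./my_gpu_program          # hypothetical executable
```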
<br />
<br />
* Submitting a Job<br />
<br />
From the login node, you can submit jobs to the compute nodes using the syntax<br />
<br />
sbatch myscript.sh<br />
<br />
where myscript.sh is a shell script containing the call to the executable to be submitted to the cluster. Typically, for a MATLAB job, it would look like:<br />
<br />
 #!/bin/bash -l<br />
 #SBATCH -p cortex<br />
 #SBATCH --time=03:30:00<br />
 #SBATCH --mem-per-cpu=2G<br />
 cd /clusterfs/cortex/scratch/working/dir/for/your/code<br />
 module load matlab/R2013a<br />
 matlab -nodisplay -nojvm -r "mymatlabfunction( parameters); exit"<br />
 exit<br />
<br />
The --time option defines the walltime of the job, which is an upper bound on the estimated runtime; the job will be killed once this time has elapsed. --mem-per-cpu specifies how much memory the job requires per CPU; the default is 1GB.<br />
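For sweeps of many similar jobs, SLURM job arrays can replace a hand-written submission loop. A minimal sketch (the index range and the way ''mymatlabfunction'' consumes $SLURM_ARRAY_TASK_ID are illustrative):<br />

```shell
#!/bin/bash -l
#SBATCH -p cortex
#SBATCH --time=03:30:00
#SBATCH --array=0-9            # submits 10 copies of this script
cd /clusterfs/cortex/scratch/working/dir/for/your/code
module load matlab/R2013a
# each array task sees its own index in $SLURM_ARRAY_TASK_ID
matlab -nodisplay -nojvm -r "mymatlabfunction( $SLURM_ARRAY_TASK_ID ); exit"
```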
<br />
* Monitoring Jobs <br />
<br />
Additional options can be passed to sbatch to monitor outputs from running jobs:<br />
<br />
 sbatch -o outputfile.txt -e errorfile.txt -J jobdescriptor myscript.sh<br />
<br />
The output of the job will be piped to outputfile.txt, and any errors, if the job crashes, to errorfile.txt.<br />
<br />
* Cluster usage<br />
<br />
Use<br />
 squeue<br />
to get a list of pending and running jobs on the cluster. It shows user names, the jobdescriptor passed to sbatch, runtime, and nodes.<br />
<br />
<br />
To start an interactive session on the cluster (you must specify the partition and walltime, as shown here):<br />
<br />
 srun -u -p cortex -t 2:0:0 --pty bash -i<br />
<br />
=== Perceus commands ===<br />
<br />
The perceus manual is [http://www.warewulf-cluster.org/portal/book/export/html/7 here]<br />
<br />
* listing available cluster nodes:<br />
<br />
wwstats<br />
wwnodes<br />
<br />
* list cluster usage<br />
<br />
wwtop<br />
<br />
* to restrict the scope of these commands to cortex cluster, add the following line to your .bashrc<br />
<br />
export NODES='*cortex'<br />
<br />
* module list<br />
* module avail<br />
* module help<br />
<br />
* help pages are [http://lrc.lbl.gov/html/guide.html here]<br />
<br />
=== Finding out the list of occupants on each cluster node ===<br />
<br />
* One can find out the list of users on a particular node by ssh'ing into the node, e.g.<br />
<br />
ssh n0000.cortex<br />
<br />
* After logging into the node, type<br />
<br />
top<br />
<br />
* This is useful if you believe someone is abusing the machine and you would like to send them a friendly reminder.<br />
<br />
== Cluster Administration ==<br />
<br />
[[ClusterAdmin]] has information about cluster administration.<br />
<br />
= Job Management =<br />
<br />
In order to coordinate our cluster usage patterns fairly, our cluster uses a job manager known as SLURM. If you are planning to run jobs on the cluster, you should be using SLURM! Learn how [http://redwood.berkeley.edu/wiki/Cluster_Job_Management here].<br />
<br />
<br />
= Software =<br />
For information on what software is installed on the cluster and how to access it, head [http://redwood.berkeley.edu/wiki/Cluster-Software here]<br />
<br />
= Usage Tips =<br />
Here are some tips on how to effectively use the cluster.<br />
<br />
== Embarrassingly Parallel Submissions ==<br />
<br />
Here is an alternate script to do embarrassingly parallel submissions on the cluster. Note that these scripts use the older PBS/Torque syntax (qsub and #PBS directives); on a SLURM system the submission command is sbatch.<br />
<br />
iterate.sh<br />
 #!/bin/sh<br />
 # Sweep upper bounds for Epsilon and Beta<br />
 param2=1.2<br />
 param3=.75<br />
 # LeapSize<br />
 for i in 14 15 16<br />
 do<br />
     # Epsilon<br />
     for j in $(seq .8 .1 $param2);<br />
     do<br />
         # Beta<br />
         for k in $(seq .65 .01 $param3);<br />
         do<br />
             echo $i,$j,$k<br />
             qsub -v "LeapSize=$i,Epsilon=$j,Beta=$k" param_test.sh<br />
         done<br />
     done<br />
 done<br />
<br />
param_test.sh<br />
 #!/bin/bash<br />
 #PBS -q cortex<br />
 #PBS -l nodes=1:ppn=2:gpu<br />
 #PBS -l walltime=10:35:00<br />
 #PBS -o /global/home/users/mayur/Logs<br />
 #PBS -e /global/home/users/mayur/Errors<br />
 cd /global/home/users/mayur/HMC_reducedflip/<br />
 module load matlab<br />
 echo "Epsilon = $Epsilon"<br />
 echo "Leap Size = $LeapSize"<br />
 echo "Beta = $Beta"<br />
 matlab -nodisplay -nojvm -r "make_figures_fneval_cluster $LeapSize $Epsilon $Beta"<br />
<br />
Now run ./iterate.sh<br />
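Under SLURM, the same sweep can be written with sbatch --export in place of qsub -v. A dry-run sketch (it assumes a param_test.sh whose #PBS lines have been rewritten as #SBATCH directives; the echo prints each submission command instead of running it, so remove the echo to actually submit):<br />

```shell
#!/bin/sh
# Dry run of an embarrassingly parallel sweep under SLURM.
# --export=ALL,VAR=value passes the named variables into the job's
# environment (while keeping the rest), much like qsub -v does.
njobs=0
for LeapSize in 14 15 16; do
    for Epsilon in 0.8 0.9 1.0 1.1 1.2; do
        for Beta in 0.65 0.70 0.75; do
            echo sbatch --export=ALL,LeapSize=$LeapSize,Epsilon=$Epsilon,Beta=$Beta param_test.sh
            njobs=$((njobs + 1))
        done
    done
done
echo "would submit $njobs jobs"
```

Explicit value lists are used here instead of seq to make the number of submitted jobs obvious at a glance.<br />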
<br />
= Support Requests =<br />
<br />
* If you have a problem that is not covered on this page, you can send an email to our user list:<br />
<br />
[mailto:redwood_cluster@lists.berkeley.edu redwood_cluster@lists.berkeley.edu]<br />
<br />
* If you need additional help from the LBL group, send an email to their email list (please always cc our email list as well), or visit their website [https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/].<br />
<br />
[mailto:hpcshelp@lbl.gov hpcshelp@lbl.gov]<br />
<br />
* In urgent cases, you can also email [mailto:kmuriki@lbl.gov Krishna Muriki] (LBL User Services) directly.</div>
<hr />
<div>= General Information =<br />
<br />
The Redwood computing cluster consists of about a dozen somewhat heterogeneous machines, some with graphics cards (GPUs). The typical use cases for the cluster are that you have jobs that run in parallel which are independent, so having several machines will complete the task faster, even though any one machine might not be faster than your own laptop. Or you have a long running job which may take a day, and you don't want to worry about having to leave your laptop on at all times and not be able to use it. Another reason is that your code leverages a communication scheme (such as MPI) to have multiple machines cooperatively work on a problem. Lastly, if you want to do long GPU computations. <br />
<br />
In order for the cluster to be useful and well-utilized, it works best for everyone to submit jobs TODO (see '''SLURM''' further down on this page for the details) to the queue. A job may not start right away, but will get run once its turn comes. Please do not run extended interactive sessions or ssh directly to worker nodes for performing computation.<br />
<br />
== Hardware Overview == <br />
<br />
The current hardware and node configuration is listed [https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/ucb-supercluster/cortex here].<br />
<br />
In addition to the compute nodes we own a file server TODO<br />
NetOp 4TB<br />
which is mounted as scratch space.<br />
<br />
In brief, we have 14 nodes with over 60 cores and 4 GPUs.<br />
<br />
== Getting an account and one-time password service == <br />
In order to get an account on the cluster, please send an email to Bruno (baolshausen AT berk...edu) with the following information:<br />
<br />
Full Name <emailaddress> desiredusername<br />
<br />
Please also include a note about which PI you are working with. Note: the '''desireusername''' must be 3-8 characters long, so it would have been truncated to '''desireus''' in this case.<br />
<br />
'''OTP (One Time Password) Service'''<br />
<br />
Once you have a username, you will need to follow the instructions found [https://commons.lbl.gov/display/itfaq/OTP+%28One+Time+Password%29+Service here] to set up the Pledge application, which gives you a one-time password for logging into the cluster (see '''Installing and Configuring the OTP Token''').<br />
<br />
== Directory setup ==<br />
<br />
=== Home Directory Quota ===<br />
<br />
There is a 10GB quota limit enforced on $HOME directory<br />
(/global/home/users/username) usage. Please keep your usage below<br />
this limit. There will be NETAPP snapshots in place in this file<br />
system so we suggest you store only your source code and scripts<br />
in this area and store all your data under /clusterfs/cortex<br />
(see below).<br />
<br />
In order to see your current quota and usage, use the following command: TODO<br />
<br />
quota -s<br />
<br />
=== Data ===<br />
<br />
For large amounts of data, please create a directory<br />
<br />
/clusterfs/cortex/scratch/username<br />
<br />
and store the data inside that directory. Note that unlike the home directory, scratch space is not backed up and permanence of your data is not guaranteed. There is a total limit of 4 TB for this drive that is shared by everyone at the Redwood center.<br />
<br />
== Connect ==<br />
<br />
==== Pledge App (get a password) ====<br />
<br />
* Run the pledge app and click "Generate one-time password"<br />
* Enter your PIN and press "Enter"<br />
* The application will present your 7 digit one time password<br />
<br />
=== ssh to a login node ===<br />
<br />
ssh -Y username@hpc.brc.berkeley.edu<br />
<br />
and use your one-time password.<br />
<br />
If you intend on working with a remote GUI session you can add a -C flag to the command above to enable compression data to be sent through the ssh tunnel.<br />
<br />
''' note: please don't use the login nodes for computations (e.g. matlab, python)! '''<br />
<br />
=== Setup environment ===<br />
<br />
* put all your customizations into your .bashrc <br />
* for login shells, .bash_profile is used, which in turn loads .bashrc<br />
<br />
=== Using a Windows machine ===<br />
Windows is not a Unix-based operating system and as a result does not natively interface with a Unix environment. Download the 2 following pieces of software to create a workaround:<br />
* Install a Unix environment emulator to interface directly with the cluster. Cygwin [http://www.cygwin.com] seems to work well. During installation make sure to install Net -> "openssh". Editors -> "vim" is also recommended. Then you can use the instructions detailed in ssh to gateway above<br />
* Install an SFTP/SCP/FTP client to allow for file sharing between the cluster and your local machine. WinSCP [http://www.winscp.net] is recommended. ExpanDrive can also be used to create a cluster-based network drive on your local machine.<br />
<br />
== Useful commands ==<br />
<br />
See https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/scheduler/ucb-supercluster-slurm-migration for a detailed FAQ on the SLURM job manager. <br />
<br />
Full description of our system by the LBL folks is at http://go.lbl.gov/hpcs-user-svcs/ucb-supercluster/cortex<br />
<br />
=== SLURM ===<br />
<br />
SLURM is our scheduler. It is very important you understand SLURM well to have a good time doing research on the cluster. SLURM is our administrator on the cluster, it helps you find resources for your job. It also helps others do the same, so we are not stepping on each others' toes. There are some do's and don'ts with using SLURM.<br />
<br />
* Logging in -- when you login to the cluster, you end up landing on the login node. We do not own the login node and share this with other members of the Berkeley Research Consortium. So, it is important not to run anything here *at all*<br />
<br />
* Information on Submitting, Monitoring, Reviewing Jobs can be found here. You can do many simple BASH tricks to submit a large number of embarrassingly parallel jobs on the cluster. This is great for parameter sweeps. <br />
<br />
* Storage -- every user gets a 10 GB quota gratis from the BRC. This is your home folder or where you land when you login. In addition to this there's a 20TB scratch space (/clusterfs/cortex/scratch) shared by all members of the Redwood Center. We have a log of how much space is being used by each member who writes into the scratch folder at (TODO)<br />
<br />
* We have 4 GPU nodes and information on requesting and using them can be found here. When you request a GPU as a resource, you get the whole node along with it. <br />
<br />
* We have a debug queue that can be requested for research here<br />
<br />
<br />
* Submitting a Job<br />
<br />
From the login node, you can submit jobs to the compute nodes using the syntax<br />
<br />
sbatch myscript.sh<br />
<br />
where the myscript.sh is an shell script containing the call to the executable to be submitted to the cluster. Typically, for a matlab job, it would look like<br />
<br />
#!/bin/bash -l<br />
#SBATCH -p cortex<br />
#SBATCH --time=03:30:00<br />
#SBATCH --mem-per-cpu=2G<br />
cd /clusterfs/cortex/scratch/working/dir/for/your/code<br />
module load matlab/R2013a<br />
matlab -nodisplay -nojvm -r "mymatlabfunction( parameters); exit"<br />
exit<br />
<br />
the --time defines the walltime of the job, which is an upper bound on the estimated runtime. The job will be killed after this time is elapsed. --mem specifies how much memory the job requires, the default is 1GB per job. <br />
<br />
* Monitoring Jobs <br />
<br />
Additional options can be passed to sbatch to capture output from the running jobs:<br />
<br />
sbatch -o outputfile.txt -e errorfile.txt -J jobdescriptor myscript.sh<br />
<br />
The standard output of the job will be written to outputfile.txt, and any errors (e.g. if the job crashes) to errorfile.txt.<br />
<br />
* Cluster usage<br />
<br />
Use<br />
squeue<br />
to get a list of pending and running jobs on the cluster. It shows each job's user name, the job descriptor passed to sbatch, its runtime, and its nodes.<br />
<br />
<br />
To start an interactive session on the cluster (this requires specifying the partition and walltime, as shown here):<br />
<br />
srun -u -p cortex -t 2:0:0 --pty bash -i<br />
<br />
=== Perceus commands ===<br />
<br />
The Perceus manual is [http://www.warewulf-cluster.org/portal/book/export/html/7 here]<br />
<br />
* listing available cluster nodes:<br />
<br />
wwstats<br />
wwnodes<br />
<br />
* list cluster usage<br />
<br />
wwtop<br />
<br />
* to restrict the scope of these commands to the cortex cluster, add the following line to your .bashrc<br />
<br />
export NODES='*cortex'<br />
<br />
* module list<br />
* module avail<br />
* module help<br />
<br />
* help pages are [http://lrc.lbl.gov/html/guide.html here]<br />
<br />
=== Finding out the list of occupants on each cluster node ===<br />
<br />
* You can find out which users are on a particular node by ssh-ing into it, e.g.<br />
<br />
ssh n0000.cortex<br />
<br />
* After logging into the node, type<br />
<br />
top<br />
<br />
* This is useful if you believe someone is abusing the machine and would like to send him/her a friendly reminder.<br />
<br />
== Cluster Administration ==<br />
<br />
[[ClusterAdmin]] has information about cluster administration.<br />
<br />
= Job Management =<br />
<br />
In order to coordinate our cluster usage patterns fairly, our cluster uses a job manager known as SLURM. If you are planning to run jobs on the cluster, you should be using SLURM! Learn how [http://redwood.berkeley.edu/wiki/Cluster_Job_Management here].<br />
<br />
<br />
= Software =<br />
For information on what software is installed on the cluster and how to access it, head [http://redwood.berkeley.edu/wiki/Cluster-Software here]<br />
<br />
= Usage Tips TODO =<br />
Here are some tips on how to effectively use the cluster.<br />
<br />
== Embarrassingly Parallel Submissions ==<br />
<br />
Here is an alternate pair of scripts for embarrassingly parallel submissions on the cluster. Note that they use the older PBS/Torque-style qsub syntax; under SLURM, sbatch is the equivalent submission command.<br />
<br />
iterate.sh<br />
#!/bin/sh<br />
#Epsilon upper bound<br />
param2=1.2<br />
#Beta upper bound<br />
param3=.75<br />
#Leap size values<br />
for i in 14 15 16<br />
do<br />
#Epsilon values<br />
for j in $(seq .8 .1 $param2);<br />
do<br />
#Beta values<br />
for k in $(seq .65 .01 $param3);<br />
do<br />
echo $i,$j,$k<br />
qsub param_test.sh -v "LeapSize=$i,Epsilon=$j,Beta=$k"<br />
done<br />
done<br />
done<br />
<br />
param_test.sh<br />
#!/bin/bash<br />
#PBS -q cortex<br />
#PBS -l nodes=1:ppn=2:gpu<br />
#PBS -l walltime=10:35:00<br />
#PBS -o /global/home/users/mayur/Logs<br />
#PBS -e /global/home/users/mayur/Errors<br />
cd /global/home/users/mayur/HMC_reducedflip/<br />
module load matlab<br />
echo "Epsilon = $Epsilon"<br />
echo "Leap Size = $LeapSize"<br />
echo "Beta = $Beta"<br />
matlab -nodisplay -nojvm -r "make_figures_fneval_cluster $LeapSize $Epsilon $Beta"<br />
<br />
Now make iterate.sh executable (chmod +x iterate.sh) and run ./iterate.sh<br />
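<br />
Under SLURM, the same sweep can be sketched with sbatch --export in place of qsub -v. This is a minimal sketch, shown as a dry run that only prints the submission commands; drop the echo to actually submit:<br />
<br />
```shell
#!/bin/sh
# Dry run of the same parameter sweep under SLURM: print one sbatch command
# per (LeapSize, Epsilon, Beta) triple. Drop "echo" to actually submit.
for i in 14 15 16; do
    for j in $(seq 0.8 0.1 1.2); do
        for k in $(seq 0.65 0.01 0.75); do
            echo sbatch --export=ALL,LeapSize=$i,Epsilon=$j,Beta=$k param_test.sh
        done
    done
done
```
<br />
Each submitted job would then see LeapSize, Epsilon, and Beta in its environment, just as with qsub -v.<br />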
<br />
= Support Requests =<br />
<br />
* If you have a problem that is not covered on this page, you can send an email to our user list:<br />
<br />
[mailto:redwood_cluster@lists.berkeley.edu redwood_cluster@lists.berkeley.edu]<br />
<br />
* If you need additional help from the LBL group, send an email to their list below, and please always cc our list as well. You can also visit their [https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/ website].<br />
<br />
[mailto:hpcshelp@lbl.gov hpcshelp@lbl.gov]<br />
<br />
* In urgent cases, you can also email [mailto:kmuriki@lbl.gov Krishna Muriki] (LBL User Services) directly.</div>Giselyhttps://rctn.org/w/index.php?title=Cluster&diff=8332Cluster2015-09-12T00:55:53Z<p>Gisely: /* General Information */</p>
<hr />
<div>= General Information =<br />
<br />
The Redwood computing cluster consists of about a dozen somewhat heterogeneous machines, some with graphics cards (GPUs). Typical use cases: you have independent jobs that can run in parallel, so several machines complete the task faster even though no single machine may be faster than your own laptop; you have a long-running job that may take a day, and you don't want to leave your laptop on at all times or be unable to use it; your code leverages a communication scheme (such as MPI) to have multiple machines cooperatively work on a problem; or you want to run long GPU computations. <br />
<br />
In order for the cluster to be useful and well-utilized, it works best for everyone to submit jobs TODO (see '''SLURM''' further down on this page for the details) to the queue. A job may not start right away, but it will run once its turn comes. Please do not run extended interactive sessions or ssh directly to worker nodes to perform computation.<br />
<br />
== Hardware Overview == <br />
<br />
The current hardware and node configuration is listed [https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/ucb-supercluster/cortex here].<br />
<br />
In addition to the compute nodes, we own a file server (TODO: NetOp, 4 TB) which is mounted as scratch space.<br />
<br />
In brief, we have 14 nodes with over 60 cores and 4 GPUs.<br />
<br />
== Getting an account and one-time password service == <br />
In order to get an account on the cluster, please send an email to Bruno (baolshausen AT berk...edu) with the following information:<br />
<br />
Full Name <emailaddress> desiredusername<br />
<br />
Please also include a note about which PI you are working with. Note: the '''desiredusername''' must be 3-8 characters long, so in this example it would be truncated to '''desiredu'''.<br />
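<br />
For instance, the 8-character truncation can be checked with a quick shell one-liner (using the example name above):<br />
<br />
```shell
# Truncate a candidate username to the 8-character limit.
echo "desiredusername" | cut -c1-8   # prints "desiredu"
```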
<br />
'''OTP (One Time Password) Service'''<br />
<br />
Once you have a username, you will need to follow the instructions found [https://commons.lbl.gov/display/itfaq/OTP+%28One+Time+Password%29+Service here] to set up the Pledge application, which gives you a one-time password for logging into the cluster (see '''Installing and Configuring the OTP Token''').<br />
<br />
== Directory setup ==<br />
<br />
=== Home Directory Quota ===<br />
<br />
There is a 10GB quota limit enforced on $HOME directory<br />
(/global/home/users/username) usage. Please keep your usage below<br />
this limit. There will be NETAPP snapshots in place in this file<br />
system so we suggest you store only your source code and scripts<br />
in this area and store all your data under /clusterfs/cortex<br />
(see below).<br />
<br />
In order to see your current quota and usage, use the following command: TODO<br />
<br />
quota -s<br />
<br />
=== Data ===<br />
<br />
For large amounts of data, please create a directory<br />
<br />
/clusterfs/cortex/scratch/username<br />
<br />
and store the data inside that directory. Note that unlike the home directory, scratch space is not backed up and permanence of your data is not guaranteed. There is a total limit of 4 TB for this drive that is shared by everyone at the Redwood center.<br />
<br />
== Connect ==<br />
<br />
==== Pledge App (get a password) ====<br />
<br />
* Run the pledge app and click "Generate one-time password"<br />
* Enter your PIN and press "Enter"<br />
* The application will present your 7 digit one time password<br />
<br />
=== ssh to a login node ===<br />
<br />
ssh -Y username@hpc.brc.berkeley.edu<br />
<br />
and use your one-time password.<br />
<br />
If you intend to work with a remote GUI session, you can add the -C flag to the command above to enable compression of data sent through the ssh tunnel.<br />
<br />
''' note: please don't use the login nodes for computations (e.g. matlab, python)! '''<br />
<br />
=== Setup environment ===<br />
<br />
* put all your customizations into your .bashrc <br />
* for login shells, .bash_profile is used, which in turn loads .bashrc<br />
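<br />
The login-shell behavior above can be illustrated in a throwaway sandbox (a temporary HOME directory, so your real dotfiles are untouched):<br />
<br />
```shell
# Demonstrate .bash_profile delegating to .bashrc, using a temporary HOME.
demo=$(mktemp -d)
echo 'echo "customizations loaded"' > "$demo/.bashrc"
printf '%s\n' 'if [ -f "$HOME/.bashrc" ]; then . "$HOME/.bashrc"; fi' > "$demo/.bash_profile"
# Source the profile the way a login shell would:
HOME="$demo" bash -c '. "$HOME/.bash_profile"'   # prints "customizations loaded"
```
<br />
Keeping the delegation in .bash_profile means the same customizations apply to both login and non-login shells.<br />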
<br />
=== Using a Windows machine ===<br />
Windows is not a Unix-based operating system and does not natively interface with a Unix environment. Download the following two pieces of software as a workaround:<br />
* Install a Unix environment emulator to interface directly with the cluster. Cygwin [http://www.cygwin.com] seems to work well. During installation make sure to install Net -> "openssh". Editors -> "vim" is also recommended. Then you can use the instructions detailed in ''ssh to a login node'' above.<br />
* Install an SFTP/SCP/FTP client to allow for file sharing between the cluster and your local machine. WinSCP [http://www.winscp.net] is recommended. ExpanDrive can also be used to create a cluster-based network drive on your local machine.<br />
<br />
== Useful commands ==<br />
<br />
See https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/scheduler/ucb-supercluster-slurm-migration for a detailed FAQ on the SLURM job manager. <br />
<br />
Full description of our system by the LBL folks is at http://go.lbl.gov/hpcs-user-svcs/ucb-supercluster/cortex<br />
<br />
</div>Giselyhttps://rctn.org/w/index.php?title=Cluster_Job_Management&diff=8330Cluster Job Management2015-09-12T00:26:07Z<p>Gisely: /* Common SLURM arguments */</p>
<hr />
<div>==Overview==<br />
<br />
In order to coordinate our cluster usage patterns fairly, our cluster uses a job manager known as SLURM. This page provides some information about common usage patterns for SLURM and cluster etiquette. <br />
<br />
See https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/scheduler/ucb-supercluster-slurm-migration for a detailed FAQ on the SLURM job manager. <br />
<br />
Full description of our system by the LBL folks is at http://go.lbl.gov/hpcs-user-svcs/ucb-supercluster/cortex<br />
<br />
== Basic job submission ==<br />
<br />
From the login node, you can submit jobs to the compute nodes using the syntax<br />
<br />
sbatch myscript.sh<br />
<br />
where myscript.sh is a shell script that executes your program. SLURM has several parameters that need to be specified. For example, you might like to specify an output file for your job. This can be done using the -o argument:<br />
<br />
sbatch -o outputfile.txt myscript.sh<br />
<br />
Remembering to include all the arguments you need can get cumbersome. An easier way is to include the SLURM parameters in the header of the script that you submit. This is done by putting one or more lines starting with #SBATCH near the start of your file. For example, here is a simple script that specifies the output file inside the script (rather than as a command-line argument):<br />
<br />
#!/bin/bash -l<br />
#SBATCH -o outputfile.txt<br />
...<br />
<br />
Where ... is the body of your script.<br />
<br />
== Common SLURM arguments ==<br />
<br />
It's a good idea to assign your job a name. This helps make your job identifiable when using other commands such as squeue (see below). To do this, use the -J argument:<br />
<br />
-J YOUR_JOB_NAME<br />
<br />
Good cluster etiquette dictates that you set a walltime (i.e. an upper bound on how long your job can run) on your script. This helps SLURM schedule jobs fairly. For example, to limit your job to one hour of execution time use<br />
<br />
--time=01:00:00<br />
<br />
In addition to capturing the standard output (-o) of your process, you can also capture its standard error (-e) output (e.g. if running the job produces errors):<br />
<br />
-e errorfile.txt<br />
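<br />
Putting these arguments together, a script header might look like the following (all names illustrative). Because #SBATCH lines are ordinary comments to the shell, such a script also runs as plain bash:<br />
<br />
```shell
#!/bin/bash -l
# Job parameters live in #SBATCH comment lines: bash ignores them,
# while sbatch reads them at submission time. Names are illustrative.
#SBATCH -J demo_job
#SBATCH -o outputfile.txt
#SBATCH -e errorfile.txt
#SBATCH --time=01:00:00
echo "demo job body running"
```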
<br />
== Monitoring Jobs ==<br />
<br />
Use<br />
squeue<br />
to get a list of pending and running jobs on the cluster. It shows each job's user name, the job descriptor passed to sbatch, its runtime, and its nodes.<br />
<br />
<br />
== Interactive Sessions ==<br />
<br />
To start an interactive session on the cluster (this requires specifying the partition and walltime, as shown here):<br />
<br />
srun -u -p cortex -t 2:0:0 --pty bash -i<br />
<br />
== Perceus commands ==<br />
<br />
The Perceus manual is [http://www.warewulf-cluster.org/portal/book/export/html/7 here]<br />
<br />
* listing available cluster nodes:<br />
<br />
wwstats<br />
wwnodes<br />
<br />
* list cluster usage<br />
<br />
wwtop<br />
<br />
* to restrict the scope of these commands to the cortex cluster, add the following line to your .bashrc<br />
<br />
export NODES='*cortex'<br />
<br />
* module list<br />
* module avail<br />
* module help<br />
<br />
* help pages are [http://lrc.lbl.gov/html/guide.html here]<br />
<br />
== Finding out the list of occupants on each cluster node ==<br />
<br />
* You can find out which users are on a particular node by ssh-ing into it, e.g.<br />
<br />
ssh n0000.cortex<br />
<br />
* After logging into the node, type<br />
<br />
top<br />
<br />
* This is useful if you believe someone is abusing the machine and would like to send him/her a friendly reminder.</div>Giselyhttps://rctn.org/w/index.php?title=Cluster_Job_Management&diff=8328Cluster Job Management2015-09-12T00:24:30Z<p>Gisely: /* Basic job submission */</p>
<hr />
<div>==Overview==<br />
<br />
In order to coordinate our cluster usage patterns fairly, our cluster uses a job manager known as SLURM. This page provides some information about common usage patterns for SLURM and cluster etiquette. <br />
<br />
See https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/scheduler/ucb-supercluster-slurm-migration for a detailed FAQ on the SLURM job manager. <br />
<br />
Full description of our system by the LBL folks is at http://go.lbl.gov/hpcs-user-svcs/ucb-supercluster/cortex<br />
<br />
== Basic job submission ==<br />
<br />
From the login node, you can submit jobs to the compute nodes using the syntax<br />
<br />
sbatch myscript.sh<br />
<br />
where myscript.sh is a shell script that executes your program. SLURM has several parameters that need to be specified. For example, you might like to specify an output file for your job. This can be done using the -o argument:<br />
<br />
sbatch -o outputfile.txt myscript.sh<br />
<br />
Remembering to include all the arguments you need can get cumbersome. An easier way is to include the SLURM parameters in the header of the script you submit. This is done by putting one or more lines starting with #SBATCH near the start of your file. For example, here is a simple script that specifies the output file in the script itself (rather than as a command-line argument):<br />
<br />
#!/bin/bash -l<br />
#SBATCH -o outputfile.txt<br />
...<br />
<br />
Where ... is the body of your script.<br />
<br />
== Common SLURM arguments ==<br />
<br />
It's a good idea to assign your job a name. This helps make your job identifiable when using other commands such as squeue (see below). To do this, use the -J argument:<br />
<br />
-J YOUR_JOB_NAME<br />
<br />
Good cluster etiquette dictates that you set a walltime (i.e. an upper bound on how long your job can run). This helps SLURM schedule jobs fairly. For example, to limit your job to one hour of execution time, use<br />
<br />
--time=01:00:00<br />
<br />
In addition to capturing the standard output (-o) of your process, you can also capture its standard error output (-e), e.g. if the running job produces errors:<br />
<br />
-e errorfile.txt<br />
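<br />
Putting the arguments above together, a minimal complete submission script might look like this (the job name, walltime, and file names are illustrative):<br />

```shell
# Write an example job script combining -J, --time, -o and -e.
# The quoted 'EOF' keeps $(hostname) unexpanded until the job runs.
cat > myjob.sh <<'EOF'
#!/bin/bash -l
#SBATCH -J YOUR_JOB_NAME
#SBATCH --time=01:00:00
#SBATCH -o outputfile.txt
#SBATCH -e errorfile.txt
echo "running on $(hostname)"
EOF
```

You would then submit it with sbatch myjob.sh.<br />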
<br />
* Monitoring Jobs <br />
<br />
Additional options can be passed to sbatch to monitor output from running jobs:<br />
<br />
sbatch -o outputfile.txt -e errorfile.txt -J jobdescriptor myscript.sh<br />
<br />
The job's standard output will be written to outputfile.txt, and any errors printed if the job crashes will go to errorfile.txt.<br />
<br />
* Cluster usage<br />
<br />
Use<br />
squeue<br />
to get a list of pending and running jobs on the cluster. It will show user names, the job descriptors passed to sbatch (-J), runtimes, and assigned nodes.<br />
<br />
<br />
To start an interactive session on the cluster (this requires specifying the partition and walltime, as shown here):<br />
<br />
srun -u -p cortex -t 2:0:0 --pty bash -i<br />
<br />
== Perceus commands ==<br />
<br />
The perceus manual is [http://www.warewulf-cluster.org/portal/book/export/html/7 here]<br />
<br />
* listing available cluster nodes:<br />
<br />
wwstats<br />
wwnodes<br />
<br />
* list cluster usage<br />
<br />
wwtop<br />
<br />
* to restrict the scope of these commands to the cortex cluster, add the following line to your .bashrc<br />
<br />
export NODES='*cortex'<br />
<br />
* module list<br />
* module avail<br />
* module help<br />
<br />
* help pages are [http://lrc.lbl.gov/html/guide.html here]<br />
<br />
== Finding out the list of occupants on each cluster node ==<br />
<br />
* You can find out which users are on a particular node by sshing into it, e.g.<br />
<br />
ssh n0000.cortex<br />
<br />
* After logging into the node, type<br />
<br />
top<br />
<br />
* This is useful if you believe someone is abusing the machine and would like to send him/her a friendly reminder.</div>Giselyhttps://rctn.org/w/index.php?title=Cluster_Job_Management&diff=8327Cluster Job Management2015-09-12T00:13:38Z<p>Gisely: /* Basic job submission */</p>
<hr />
<div>==Overview==<br />
<br />
In order to coordinate our cluster usage patterns fairly, our cluster uses a job manager known as SLURM. This page provides some information about common usage patterns for SLURM and cluster etiquette. <br />
<br />
See https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/scheduler/ucb-supercluster-slurm-migration for a detailed FAQ on the SLURM job manager. <br />
<br />
Full description of our system by the LBL folks is at http://go.lbl.gov/hpcs-user-svcs/ucb-supercluster/cortex<br />
<br />
== Basic job submission ==<br />
<br />
From the login node, you can submit jobs to the compute nodes using the syntax<br />
<br />
sbatch myscript.sh<br />
<br />
where myscript.sh is a shell script that executes your program. SLURM has several parameters that need to be specified. For example, you might like to specify an output file for your job. This can be done using the -o argument:<br />
<br />
sbatch -o outputfile.txt myscript.sh<br />
<br />
Remembering to include all the arguments you need can get cumbersome. An easier way is to include the SLURM parameters in the header of the script you submit. This is done by putting one or more lines starting with #SBATCH near the start of your file. For example, here is a simple script that specifies the output file in the script itself (rather than as a command-line argument):<br />
<br />
#!/bin/bash -l<br />
#SBATCH -o outputfile.txt<br />
...<br />
<br />
Where ... is the body of your script.<br />
<br />
<br />
The --time argument defines the walltime of the job, which is an upper bound on the estimated runtime; the job will be killed after this time has elapsed. --mem specifies how much memory the job requires; the default is 1GB per job. <br />
<br />
* Monitoring Jobs <br />
<br />
Additional options can be passed to sbatch to monitor output from running jobs:<br />
<br />
sbatch -o outputfile.txt -e errorfile.txt -J jobdescriptor myscript.sh<br />
<br />
The job's standard output will be written to outputfile.txt, and any errors printed if the job crashes will go to errorfile.txt.<br />
<br />
* Cluster usage<br />
<br />
Use<br />
squeue<br />
to get a list of pending and running jobs on the cluster. It will show user names, the job descriptors passed to sbatch (-J), runtimes, and assigned nodes.<br />
<br />
<br />
To start an interactive session on the cluster (this requires specifying the partition and walltime, as shown here):<br />
<br />
srun -u -p cortex -t 2:0:0 --pty bash -i<br />
<br />
== Perceus commands ==<br />
<br />
The perceus manual is [http://www.warewulf-cluster.org/portal/book/export/html/7 here]<br />
<br />
* listing available cluster nodes:<br />
<br />
wwstats<br />
wwnodes<br />
<br />
* list cluster usage<br />
<br />
wwtop<br />
<br />
* to restrict the scope of these commands to the cortex cluster, add the following line to your .bashrc<br />
<br />
export NODES='*cortex'<br />
<br />
* module list<br />
* module avail<br />
* module help<br />
<br />
* help pages are [http://lrc.lbl.gov/html/guide.html here]<br />
<br />
== Finding out the list of occupants on each cluster node ==<br />
<br />
* You can find out which users are on a particular node by sshing into it, e.g.<br />
<br />
ssh n0000.cortex<br />
<br />
* After logging into the node, type<br />
<br />
top<br />
<br />
* This is useful if you believe someone is abusing the machine and would like to send him/her a friendly reminder.</div>Giselyhttps://rctn.org/w/index.php?title=Cluster_Job_Management&diff=8324Cluster Job Management2015-09-11T23:47:21Z<p>Gisely: /* SLURM usage */</p>
<hr />
<div>==Overview==<br />
<br />
In order to coordinate our cluster usage patterns fairly, our cluster uses a job manager known as SLURM. This page provides some information about common usage patterns for SLURM and cluster etiquette. <br />
<br />
See https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/scheduler/ucb-supercluster-slurm-migration for a detailed FAQ on the SLURM job manager. <br />
<br />
Full description of our system by the LBL folks is at http://go.lbl.gov/hpcs-user-svcs/ucb-supercluster/cortex<br />
<br />
== Basic job submission ==<br />
<br />
From the login node, you can submit jobs to the compute nodes using the syntax<br />
<br />
sbatch myscript.sh<br />
<br />
where myscript.sh is a shell script that executes your program.<br />
<br />
SLURM has several parameters that need to be specified. For example, you might like to specify an output file for your job. This can be done using the -o argument:<br />
<br />
sbatch -o outputfile.txt myscript.sh<br />
<br />
#!/bin/bash -l<br />
#SBATCH -p cortex<br />
#SBATCH --time=03:30:00<br />
#SBATCH --mem-per-cpu=2G<br />
cd /clusterfs/cortex/scratch/working/dir/for/your/code<br />
module load matlab/R2013a<br />
matlab -nodisplay -nojvm -r "mymatlabfunction( parameters); exit"<br />
exit<br />
<br />
The --time argument defines the walltime of the job, which is an upper bound on the estimated runtime; the job will be killed after this time has elapsed. --mem specifies how much memory the job requires; the default is 1GB per job. <br />
<br />
* Monitoring Jobs <br />
<br />
Additional options can be passed to sbatch to monitor output from running jobs:<br />
<br />
sbatch -o outputfile.txt -e errorfile.txt -J jobdescriptor myscript.sh<br />
<br />
The job's standard output will be written to outputfile.txt, and any errors printed if the job crashes will go to errorfile.txt.<br />
<br />
* Cluster usage<br />
<br />
Use<br />
squeue<br />
to get a list of pending and running jobs on the cluster. It will show user names, the job descriptors passed to sbatch (-J), runtimes, and assigned nodes.<br />
<br />
<br />
To start an interactive session on the cluster (this requires specifying the partition and walltime, as shown here):<br />
<br />
srun -u -p cortex -t 2:0:0 --pty bash -i<br />
<br />
== Perceus commands ==<br />
<br />
The perceus manual is [http://www.warewulf-cluster.org/portal/book/export/html/7 here]<br />
<br />
* listing available cluster nodes:<br />
<br />
wwstats<br />
wwnodes<br />
<br />
* list cluster usage<br />
<br />
wwtop<br />
<br />
* to restrict the scope of these commands to the cortex cluster, add the following line to your .bashrc<br />
<br />
export NODES='*cortex'<br />
<br />
* module list<br />
* module avail<br />
* module help<br />
<br />
* help pages are [http://lrc.lbl.gov/html/guide.html here]<br />
<br />
== Finding out the list of occupants on each cluster node ==<br />
<br />
* You can find out which users are on a particular node by sshing into it, e.g.<br />
<br />
ssh n0000.cortex<br />
<br />
* After logging into the node, type<br />
<br />
top<br />
<br />
* This is useful if you believe someone is abusing the machine and would like to send him/her a friendly reminder.</div>Giselyhttps://rctn.org/w/index.php?title=Cluster_Job_Management&diff=8323Cluster Job Management2015-09-11T23:37:08Z<p>Gisely: /* Overview */</p>
<hr />
<div>==Overview==<br />
<br />
In order to coordinate our cluster usage patterns fairly, our cluster uses a job manager known as SLURM. This page provides some information about common usage patterns for SLURM and cluster etiquette. <br />
<br />
See https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/scheduler/ucb-supercluster-slurm-migration for a detailed FAQ on the SLURM job manager. <br />
<br />
Full description of our system by the LBL folks is at http://go.lbl.gov/hpcs-user-svcs/ucb-supercluster/cortex<br />
<br />
== SLURM usage ==<br />
<br />
* Submitting a Job<br />
<br />
From the login node, you can submit jobs to the compute nodes using the syntax<br />
<br />
sbatch myscript.sh<br />
<br />
where myscript.sh is a shell script containing the call to the executable to be submitted to the cluster. Typically, for a matlab job, it would look like<br />
<br />
#!/bin/bash -l<br />
#SBATCH -p cortex<br />
#SBATCH --time=03:30:00<br />
#SBATCH --mem-per-cpu=2G<br />
cd /clusterfs/cortex/scratch/working/dir/for/your/code<br />
module load matlab/R2013a<br />
matlab -nodisplay -nojvm -r "mymatlabfunction( parameters); exit"<br />
exit<br />
<br />
The --time argument defines the walltime of the job, which is an upper bound on the estimated runtime; the job will be killed after this time has elapsed. --mem specifies how much memory the job requires; the default is 1GB per job. <br />
<br />
* Monitoring Jobs <br />
<br />
Additional options can be passed to sbatch to monitor output from running jobs:<br />
<br />
sbatch -o outputfile.txt -e errorfile.txt -J jobdescriptor myscript.sh<br />
<br />
The job's standard output will be written to outputfile.txt, and any errors printed if the job crashes will go to errorfile.txt.<br />
<br />
* Cluster usage<br />
<br />
Use<br />
squeue<br />
to get a list of pending and running jobs on the cluster. It will show user names, the job descriptors passed to sbatch (-J), runtimes, and assigned nodes.<br />
<br />
<br />
To start an interactive session on the cluster (this requires specifying the partition and walltime, as shown here):<br />
<br />
srun -u -p cortex -t 2:0:0 --pty bash -i<br />
<br />
== Perceus commands ==<br />
<br />
The perceus manual is [http://www.warewulf-cluster.org/portal/book/export/html/7 here]<br />
<br />
* listing available cluster nodes:<br />
<br />
wwstats<br />
wwnodes<br />
<br />
* list cluster usage<br />
<br />
wwtop<br />
<br />
* to restrict the scope of these commands to the cortex cluster, add the following line to your .bashrc<br />
<br />
export NODES='*cortex'<br />
<br />
* module list<br />
* module avail<br />
* module help<br />
<br />
* help pages are [http://lrc.lbl.gov/html/guide.html here]<br />
<br />
== Finding out the list of occupants on each cluster node ==<br />
<br />
* You can find out which users are on a particular node by sshing into it, e.g.<br />
<br />
ssh n0000.cortex<br />
<br />
* After logging into the node, type<br />
<br />
top<br />
<br />
* This is useful if you believe someone is abusing the machine and would like to send him/her a friendly reminder.</div>Giselyhttps://rctn.org/w/index.php?title=Cluster_Job_Management&diff=8321Cluster Job Management2015-09-11T23:36:15Z<p>Gisely: /* Useful commands */</p>
<hr />
<div>==Overview==<br />
<br />
In order to coordinate our cluster usage patterns fairly, our cluster uses a job manager known as SLURM. This page provides some information about common usage patterns for SLURM and cluster etiquette. <br />
<br />
See https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/scheduler/ucb-supercluster-slurm-migration for a detailed FAQ on the SLURM job manager. <br />
<br />
Full description of our system by the LBL folks is at http://go.lbl.gov/hpcs-user-svcs/ucb-supercluster/cortex<br />
<br />
=== SLURM usage ===<br />
<br />
* Submitting a Job<br />
<br />
From the login node, you can submit jobs to the compute nodes using the syntax<br />
<br />
sbatch myscript.sh<br />
<br />
where myscript.sh is a shell script containing the call to the executable to be submitted to the cluster. Typically, for a matlab job, it would look like<br />
<br />
#!/bin/bash -l<br />
#SBATCH -p cortex<br />
#SBATCH --time=03:30:00<br />
#SBATCH --mem-per-cpu=2G<br />
cd /clusterfs/cortex/scratch/working/dir/for/your/code<br />
module load matlab/R2013a<br />
matlab -nodisplay -nojvm -r "mymatlabfunction( parameters); exit"<br />
exit<br />
<br />
The --time argument defines the walltime of the job, which is an upper bound on the estimated runtime; the job will be killed after this time has elapsed. --mem specifies how much memory the job requires; the default is 1GB per job. <br />
<br />
* Monitoring Jobs <br />
<br />
Additional options can be passed to sbatch to monitor output from running jobs:<br />
<br />
sbatch -o outputfile.txt -e errorfile.txt -J jobdescriptor myscript.sh<br />
<br />
The job's standard output will be written to outputfile.txt, and any errors printed if the job crashes will go to errorfile.txt.<br />
<br />
* Cluster usage<br />
<br />
Use<br />
squeue<br />
to get a list of pending and running jobs on the cluster. It will show user names, the job descriptors passed to sbatch (-J), runtimes, and assigned nodes.<br />
<br />
<br />
To start an interactive session on the cluster (this requires specifying the partition and walltime, as shown here):<br />
<br />
srun -u -p cortex -t 2:0:0 --pty bash -i<br />
<br />
=== Perceus commands ===<br />
<br />
The perceus manual is [http://www.warewulf-cluster.org/portal/book/export/html/7 here]<br />
<br />
* listing available cluster nodes:<br />
<br />
wwstats<br />
wwnodes<br />
<br />
* list cluster usage<br />
<br />
wwtop<br />
<br />
* to restrict the scope of these commands to the cortex cluster, add the following line to your .bashrc<br />
<br />
export NODES='*cortex'<br />
<br />
* module list<br />
* module avail<br />
* module help<br />
<br />
* help pages are [http://lrc.lbl.gov/html/guide.html here]<br />
<br />
=== Finding out the list of occupants on each cluster node ===<br />
<br />
* You can find out which users are on a particular node by sshing into it, e.g.<br />
<br />
ssh n0000.cortex<br />
<br />
* After logging into the node, type<br />
<br />
top<br />
<br />
* This is useful if you believe someone is abusing the machine and would like to send him/her a friendly reminder.</div>Giselyhttps://rctn.org/w/index.php?title=Cluster_Job_Management&diff=8320Cluster Job Management2015-09-11T23:30:20Z<p>Gisely: Created page with "== Useful commands == See https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/scheduler/ucb-supercluster-slurm-migration for a detailed FAQ on the SL..."</p>
<hr />
<div>== Useful commands ==<br />
<br />
See https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/scheduler/ucb-supercluster-slurm-migration for a detailed FAQ on the SLURM job manager. <br />
<br />
Full description of our system by the LBL folks is at http://go.lbl.gov/hpcs-user-svcs/ucb-supercluster/cortex<br />
<br />
=== SLURM usage ===<br />
<br />
* Submitting a Job<br />
<br />
From the login node, you can submit jobs to the compute nodes using the syntax<br />
<br />
sbatch myscript.sh<br />
<br />
where myscript.sh is a shell script containing the call to the executable to be submitted to the cluster. Typically, for a matlab job, it would look like<br />
<br />
#!/bin/bash -l<br />
#SBATCH -p cortex<br />
#SBATCH --time=03:30:00<br />
#SBATCH --mem-per-cpu=2G<br />
cd /clusterfs/cortex/scratch/working/dir/for/your/code<br />
module load matlab/R2013a<br />
matlab -nodisplay -nojvm -r "mymatlabfunction( parameters); exit"<br />
exit<br />
<br />
The --time argument defines the walltime of the job, which is an upper bound on the estimated runtime; the job will be killed after this time has elapsed. --mem specifies how much memory the job requires; the default is 1GB per job. <br />
<br />
* Monitoring Jobs <br />
<br />
Additional options can be passed to sbatch to monitor output from running jobs:<br />
<br />
sbatch -o outputfile.txt -e errorfile.txt -J jobdescriptor myscript.sh<br />
<br />
The job's standard output will be written to outputfile.txt, and any errors printed if the job crashes will go to errorfile.txt.<br />
<br />
* Cluster usage<br />
<br />
Use<br />
squeue<br />
to get a list of pending and running jobs on the cluster. It will show user names, the job descriptors passed to sbatch (-J), runtimes, and assigned nodes.<br />
<br />
<br />
To start an interactive session on the cluster (this requires specifying the partition and walltime, as shown here):<br />
<br />
srun -u -p cortex -t 2:0:0 --pty bash -i<br />
<br />
=== Perceus commands ===<br />
<br />
The perceus manual is [http://www.warewulf-cluster.org/portal/book/export/html/7 here]<br />
<br />
* listing available cluster nodes:<br />
<br />
wwstats<br />
wwnodes<br />
<br />
* list cluster usage<br />
<br />
wwtop<br />
<br />
* to restrict the scope of these commands to the cortex cluster, add the following line to your .bashrc<br />
<br />
export NODES='*cortex'<br />
<br />
* module list<br />
* module avail<br />
* module help<br />
<br />
* help pages are [http://lrc.lbl.gov/html/guide.html here]<br />
<br />
=== Finding out the list of occupants on each cluster node ===<br />
<br />
* You can find out which users are on a particular node by sshing into it, e.g.<br />
<br />
ssh n0000.cortex<br />
<br />
* After logging into the node, type<br />
<br />
top<br />
<br />
* This is useful if you believe someone is abusing the machine and would like to send him/her a friendly reminder.</div>Giselyhttps://rctn.org/w/index.php?title=Cluster-Software&diff=8318Cluster-Software2015-09-11T23:16:16Z<p>Gisely: /* Obtain GPU lock for Jacket in Matlab */</p>
<hr />
<div>= Software =<br />
<br />
== Matlab ==<br />
<br />
In order to use matlab, you have to load the matlab environment:<br />
<br />
module load matlab/R2013a<br />
<br />
Once the matlab environment is loaded, you can start a matlab session by running<br />
<br />
matlab -nodesktop<br />
<br />
An example SLURM script for running matlab code is<br />
<br />
#!/bin/bash -l<br />
#SBATCH -p cortex<br />
#SBATCH --time=03:30:00<br />
#SBATCH --mem-per-cpu=2G<br />
module load matlab/R2013a<br />
matlab -nodesktop -r "scriptname($variable1, $variable2); exit"<br />
<br />
The above script runs the matlab function scriptname, passing it the two shell variables $variable1 and $variable2.<br />
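<br />
A sketch of how the shell variables reach matlab: the double quotes let the shell substitute them before matlab starts. The function name and values are illustrative:<br />

```shell
# Show the command string that matlab -nodesktop -r would receive.
variable1=10
variable2=0.5
matlab_cmd="scriptname($variable1, $variable2); exit"
echo "$matlab_cmd"
```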
<br />
If you would like to see who is using matlab licenses, enter<br />
<br />
lmstat<br />
<br />
== Python ==<br />
=== Anaconda Python Distribution ===<br />
<br />
The Anaconda Python 2.7 or 3.4 Distributions can be loaded through<br />
module load python/anaconda2<br />
or<br />
module load python/anaconda3<br />
respectively. This distribution has NumPy and SciPy built against the Intel MKL BLAS library (multicore BLAS). You will need to get an [https://store.continuum.io/cshop/academicanaconda academic license] from Continuum and copy it to the cluster.<br />
<br />
On the cluster<br />
cd<br />
mkdir .continuum<br />
<br />
On the machine where you downloaded the license file<br />
scp file_name username@hpc.brc.berkeley.edu:/global/home/users/username/.continuum/.<br />
<br />
=== Local Install of Anaconda Python Distribution ===<br />
If you want to manage your own python distribution, Anaconda Python is a good choice. To get it, go to the [http://continuum.io/downloads Continuum downloads] page and select the linux distribution (penguin).<br />
Copy the download link address, and then in a terminal on the cluster run:<br />
<br />
wget paste_link_here<br />
This should download a .sh file that can be run with<br />
bash Anaconda-version_info.sh<br />
<br />
== CUDA ==<br />
<br />
CUDA is a library to use the graphics processing units (GPU) on the graphics card for general-purpose computing. We have a separate wiki page to collect information on how to do general-purpose computing on the GPU: [[GPGPU]].<br />
The --constraint={cortex_k40, cortex_fermi} option must be used in order to schedule a node with a GPU.<br />
We have installed the CUDA 6.5 driver and toolkit.<br />
<br />
In order to use CUDA, you have to load the CUDA environment:<br />
<br />
module load cuda<br />
<br />
=== Using Theano ===<br />
By default, Theano expects the default compiler to be gcc, so you'll need to unload the intel compiler.<br />
<br />
module unload intel<br />
<br />
Theano caches certain compiled libraries and these will sometimes cause errors when Theano gets updated. If you are experiencing problems with Theano, you can try clearing the cache with<br />
theano-cache clear<br />
and if you still have problems you can delete the .theano folder from your home directory.<br />
<br />
==== Using the GPU ====<br />
<br />
You must request a GPU node. The Anaconda Python distribution comes with a version of Theano that should work. If you need newer Theano features, the development version of Theano can be obtained from the [https://github.com/Theano/Theano github repository], installed locally, and added to your PYTHONPATH if you are using the preinstalled Python versions. If you have a local python install, you can install theano with<br />
python setup.py develop<br />
from the repository folder.<br />
Theano must be configured to use the GPU. General information can be found in the [http://deeplearning.net/software/theano/library/config.html Theano documentation], but a working (June 2015) version is to create a .theanorc file in your HOME directory with the contents:<br />
<br />
[global]<br />
root = /global/software/sl-6.x86_64/modules/langs/cuda/6.5/<br />
device = gpu<br />
floatX = float32<br />
force_device=True<br />
<br />
[nvcc]<br />
fastmath = True<br />
<br />
==== Using the CPU ====<br />
<br />
Theano can also run on the CPU. Any of the CPU nodes will work. You will want to have Theano built against the MKL BLAS library that comes with Anaconda, and so your .theanorc might look like<br />
<br />
[global]<br />
device = cpu<br />
floatX = float32<br />
ldflags = -lmkl_rt</div>Giselyhttps://rctn.org/w/index.php?title=Cluster-Software&diff=8315Cluster-Software2015-09-11T23:04:35Z<p>Gisely: /* Obtain GPU lock in python */</p>
<hr />
<div>= Software =<br />
<br />
== Matlab ==<br />
<br />
In order to use matlab, you have to load the matlab environment:<br />
<br />
module load matlab/R2013a<br />
<br />
Once the matlab environment is loaded, you can start a matlab session by running<br />
<br />
matlab -nodesktop<br />
<br />
An example SLURM script for running matlab code is<br />
<br />
#!/bin/bash -l<br />
#SBATCH -p cortex<br />
#SBATCH --time=03:30:00<br />
#SBATCH --mem-per-cpu=2G<br />
module load matlab/R2013a<br />
matlab -nodesktop -r "scriptname($variable1, $variable2); exit"<br />
<br />
The above script runs the matlab function scriptname, passing it the two shell variables $variable1 and $variable2.<br />
<br />
If you would like to see who is using matlab licenses, enter<br />
<br />
lmstat<br />
<br />
== Python ==<br />
=== Anaconda Python Distribution ===<br />
<br />
The Anaconda Python 2.7 or 3.4 Distributions can be loaded through<br />
module load python/anaconda2/anaconda2<br />
or<br />
module load python/anaconda3/anaconda3<br />
respectively. This distribution has NumPy and SciPy built against the Intel MKL BLAS library (multicore BLAS). You will need to get an [https://store.continuum.io/cshop/academicanaconda academic license] from Continuum and copy it to the cluster.<br />
<br />
On the cluster<br />
cd<br />
mkdir .continuum<br />
<br />
On the machine where you downloaded the license file<br />
scp file_name username@hpc.brc.berkeley.edu:/global/home/users/username/.continuum/.<br />
<br />
=== Local Install of Anaconda Python Distribution ===<br />
If you want to manage your own python distribution, Anaconda Python is a good choice. To get it, go to the [http://continuum.io/downloads Continuum downloads] page and select the linux distribution (penguin).<br />
Copy the download link address, and then in a terminal on the cluster run:<br />
<br />
wget paste_link_here<br />
This should download a .sh file that can be run with<br />
bash Anaconda-version_info.sh<br />
<br />
== CUDA ==<br />
<br />
CUDA is a library to use the graphics processing units (GPU) on the graphics card for general-purpose computing. We have a separate wiki page to collect information on how to do general-purpose computing on the GPU: [[GPGPU]].<br />
The --constraint={cortex_k40, cortex_fermi} option must be used in order to schedule a node with a GPU.<br />
We have installed the CUDA 6.5 driver and toolkit.<br />
<br />
In order to use CUDA, you have to load the CUDA environment:<br />
<br />
module load cuda<br />
<br />
=== Using Theano ===<br />
By default, Theano expects the default compiler to be gcc, so you'll need to unload the intel compiler.<br />
<br />
module unload intel<br />
<br />
Theano caches certain compiled libraries and these will sometimes cause errors when Theano gets updated. If you are experiencing problems with Theano, you can try clearing the cache with<br />
theano-cache clear<br />
and if you still have problems you can delete the .theano folder from your home directory.<br />
<br />
==== Using the GPU ====<br />
<br />
You must request a GPU node. The Anaconda Python distribution comes with a version of Theano that should work. If you need newer Theano features, the development version of Theano can be obtained from the [https://github.com/Theano/Theano github repository], installed locally, and added to your PYTHONPATH if you are using the preinstalled Python versions. If you have a local python install, you can install theano with<br />
python setup.py develop<br />
from the repository folder.<br />
Theano must be configured to use the GPU. General information can be found in the [http://deeplearning.net/software/theano/library/config.html Theano documentation], but a working (June 2015) version is to create a .theanorc file in your HOME directory with the contents:<br />
<br />
[global]<br />
root = /global/software/sl-6.x86_64/modules/langs/cuda/6.5/<br />
device = gpu<br />
floatX = float32<br />
force_device=True<br />
<br />
[nvcc]<br />
fastmath = True<br />
<br />
==== Using the CPU ====<br />
<br />
Theano can also run on the CPU. Any of the CPU nodes will work. You will want to have Theano built against the MKL BLAS library that comes with Anaconda, and so your .theanorc might look like<br />
<br />
[global]<br />
device = cpu<br />
floatX = float32<br />
ldflags = -lmkl_rt<br />
<br />
=== Obtain GPU lock for Jacket in Matlab ===<br />
<br />
If you are using Matlab, you can obtain a GPU lock by running<br />
<br />
addpath('/clusterfs/cortex/software/gpu_lock');<br />
addpath('/clusterfs/cortex/software/jacket/engine');<br />
gpu_id = obtain_gpu_lock_id();<br />
gselect(gpu_id);<br />
<br />
By default, obtain_gpu_lock_id() throws an error when all gpu cards are taken.<br />
There is another option: obtain_gpu_lock_id(true) will return -1 in case there<br />
is no card available and you can then write your own code to deal with that<br />
fact.<br />
<br />
ginfo tells you which gpu card you are using.<br />
<br />
The following lines should also be in your .bashrc<br />
<br />
## jacket stuff!<br />
module load cuda<br />
export LD_LIBRARY_PATH=/clusterfs/cortex/software/jacket/engine/lib64:$LD_LIBRARY_PATH</div>Giselyhttps://rctn.org/w/index.php?title=Cluster&diff=8314Cluster2015-09-11T23:01:22Z<p>Gisely: /* SLURM usage */</p>
<hr />
<div>= General Information =<br />
<br />
The Redwood computing cluster consists of about a dozen somewhat heterogeneous machines, some with graphics cards (GPUs). The typical use cases are: you have independent jobs that run in parallel, so several machines complete the task faster even though any one machine might not be faster than your own laptop; you have a long-running job that may take a day, and you don't want to leave your laptop on at all times and be unable to use it; or your code leverages a communication scheme (such as MPI) to have multiple machines cooperatively work on a problem. <br />
<br />
In order for the cluster to be useful and well-utilized, it works best for everyone to submit jobs to the queue (see '''sbatch''' on the [[Cluster Job Management]] page for the details). A job may not start right away, but will run once its turn comes. Please do not run extended interactive sessions or ssh directly to worker nodes for performing computation.<br />
<br />
[[ClusterAdmin]] has information about cluster administration.<br />
<br />
== Hardware Overview == <br />
<br />
The current hardware and node configuration is listed [https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/ucb-supercluster/cortex here].<br />
<br />
In addition to the compute nodes, we own a file server (NetApp, 4TB)<br />
which is mounted as scratch space.<br />
<br />
== Getting an account and one-time password service == <br />
In order to get an account on the cluster, please send an email to Bruno (baolshausen AT berk...edu) with the following information:<br />
<br />
Full Name <emailaddress> desiredusername<br />
<br />
Please also include a note about which PI you are working with. Note: the '''desiredusername''' must be 3-8 characters long, so it would have been truncated to '''desiredu''' in this case.<br />
<br />
'''OTP (One Time Password) Service'''<br />
<br />
Once you have a username, you will need to follow the instructions found [https://commons.lbl.gov/display/itfaq/OTP+%28One+Time+Password%29+Service here] to set up the Pledge application, which gives you a one-time password for logging into the cluster (see '''Installing and Configuring the OTP Token''').<br />
<br />
== Directory setup ==<br />
<br />
=== Home Directory Quota ===<br />
<br />
There is a 10GB quota limit enforced on $HOME directory<br />
(/global/home/users/username) usage. Please keep your usage below<br />
this limit. There will be NETAPP snapshots in place in this file<br />
system so we suggest you store only your source code and scripts<br />
in this area and store all your data under /clusterfs/cortex<br />
(see below).<br />
<br />
In order to see your current quota and usage, use the following command:<br />
<br />
quota -s<br />
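<br />
To see where that space is going, standard tools suffice; a quick sketch (paths and the number of items shown are illustrative):<br />
<br />
```shell
# Total usage of the home directory, then its five largest top-level items.
usage=$(du -sh "$HOME" 2>/dev/null | cut -f1)
echo "Total home usage: $usage"
du -sh "$HOME"/* 2>/dev/null | sort -h | tail -n 5
```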
<br />
=== Data ===<br />
<br />
For large amounts of data, please create a directory<br />
<br />
/clusterfs/cortex/scratch/username<br />
<br />
and store the data inside that directory. Note that unlike the home directory, scratch space is not backed up and permanence of your data is not guaranteed. There is a total limit of 4 TB for this drive that is shared by everyone at the Redwood center.<br />
<br />
== Connect ==<br />
<br />
==== Pledge App (get a password) ====<br />
<br />
* Run the pledge app and click "Generate one-time password"<br />
* Enter your PIN and press "Enter"<br />
* The application will present your 7 digit one time password<br />
<br />
=== ssh to a login node ===<br />
<br />
ssh -Y username@hpc.brc.berkeley.edu<br />
<br />
and use your one-time password.<br />
<br />
If you intend on working with a remote GUI session, you can add the -C flag to the command above to compress data sent through the ssh tunnel.<br />
<br />
''' note: please don't use the login nodes for computations (e.g. matlab, python)! '''<br />
<br />
=== Setup environment ===<br />
<br />
* put all your customizations into your .bashrc <br />
* for login shells, .bash_profile is used, which in turn loads .bashrc<br />
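<br />
A common way to wire this up is a short guard in .bash_profile (a sketch of the usual fragment, not necessarily what your current dotfiles contain):<br />
<br />
```shell
# ~/.bash_profile -- make login shells load ~/.bashrc as well
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
```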
<br />
=== Using a Windows machine ===<br />
Windows is not a Unix-based operating system and as a result does not natively interface with a Unix environment. Download the following two pieces of software as a workaround:<br />
* Install a Unix environment emulator to interface directly with the cluster. Cygwin [http://www.cygwin.com] seems to work well. During installation make sure to install Net -> "openssh"; Editors -> "vim" is also recommended. Then you can use the instructions detailed in ''ssh to a login node'' above.<br />
* Install an SFTP/SCP/FTP client to allow for file sharing between the cluster and your local machine. WinSCP [http://www.winscp.net] is recommended. ExpanDrive can also be used to create a cluster-based network drive on your local machine.<br />
<br />
== Useful commands ==<br />
<br />
See https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/scheduler/ucb-supercluster-slurm-migration for a detailed FAQ on the SLURM job manager. <br />
<br />
Full description of our system by the LBL folks is at http://go.lbl.gov/hpcs-user-svcs/ucb-supercluster/cortex<br />
<br />
=== SLURM usage ===<br />
<br />
* Submitting a Job<br />
<br />
From the login node, you can submit jobs to the compute nodes using the syntax<br />
<br />
sbatch myscript.sh<br />
<br />
where myscript.sh is a shell script containing the call to the executable to be submitted to the cluster. Typically, for a Matlab job, it would look like<br />
<br />
#!/bin/bash -l<br />
#SBATCH -p cortex<br />
#SBATCH --time=03:30:00<br />
#SBATCH --mem-per-cpu=2G<br />
cd /clusterfs/cortex/scratch/working/dir/for/your/code<br />
module load matlab/R2013a<br />
matlab -nodisplay -nojvm -r "mymatlabfunction( parameters); exit"<br />
exit<br />
<br />
The --time option defines the walltime of the job, which is an upper bound on the estimated runtime; the job will be killed once this time has elapsed. --mem-per-cpu specifies how much memory the job requires per CPU (the default is 1GB). <br />
<br />
* Monitoring Jobs <br />
<br />
Additional options can be passed to sbatch to monitor outputs from the running jobs<br />
<br />
sbatch -o outputfile.txt -e errorfile.txt -J jobdescriptor myscript.sh<br />
<br />
The output of the job will be written to outputfile.txt, and any errors if the job crashes to errorfile.txt. SLURM also supports filename patterns such as %j (the job id), e.g. -o job_%j.out.<br />
<br />
* Cluster usage<br />
<br />
Use<br />
squeue<br />
to get a list of pending and running jobs on the cluster. It shows user names, the job descriptor passed to sbatch, runtime, and nodes.<br />
<br />
<br />
To start an interactive session on the cluster (this requires specifying the partition and walltime, as shown here):<br />
<br />
srun -u -p cortex -t 2:0:0 --pty bash -i<br />
<br />
=== Perceus commands ===<br />
<br />
The perceus manual is [http://www.warewulf-cluster.org/portal/book/export/html/7 here]<br />
<br />
* listing available cluster nodes:<br />
<br />
wwstats<br />
wwnodes<br />
<br />
* list cluster usage<br />
<br />
wwtop<br />
<br />
* to restrict the scope of these commands to the cortex cluster, add the following line to your .bashrc<br />
<br />
export NODES='*cortex'<br />
<br />
* module list: show currently loaded modules<br />
* module avail: list the modules available on the system<br />
* module help: usage information for the module command<br />
<br />
* help pages are [http://lrc.lbl.gov/html/guide.html here]<br />
<br />
=== Finding out the list of occupants on each cluster node ===<br />
<br />
* One can find out the list of users on a particular node by ssh-ing into the node, e.g.<br />
<br />
ssh n0000.cortex<br />
<br />
* After logging into the node, type<br />
<br />
top<br />
<br />
* This is useful if you believe someone is abusing the machine and would like to send him/her a friendly reminder.<br />
<br />
= Software =<br />
For information on what software is installed on the cluster and how to access it, head [http://redwood.berkeley.edu/wiki/Cluster-Software here]<br />
<br />
= Usage Tips =<br />
Here are some tips on how to effectively use the cluster.<br />
<br />
== Embarrassingly Parallel Submissions ==<br />
<br />
Here is an alternate script to do embarrassingly parallel submissions on the cluster. Note that this example uses the legacy PBS syntax ('''qsub'''); under SLURM, the equivalent is '''sbatch''' with the --export option to pass variables.<br />
<br />
iterate.sh<br />
#!/bin/sh<br />
#Leap Size (param1 is unused below)<br />
param1=11<br />
#Epsilon upper bound<br />
param2=1.2<br />
#Beta upper bound<br />
param3=.75<br />
#LeapSize<br />
for i in 14 15 16<br />
do<br />
#Epsilon<br />
for j in $(seq .8 .1 $param2);<br />
do<br />
#Beta<br />
for k in $(seq .65 .01 $param3);<br />
do<br />
echo $i,$j,$k<br />
qsub param_test.sh -v "LeapSize=$i,Epsilon=$j,Beta=$k"<br />
done<br />
done<br />
done<br />
<br />
param_test.sh<br />
#!/bin/bash<br />
#PBS -q cortex<br />
#PBS -l nodes=1:ppn=2:gpu<br />
#PBS -l walltime=10:35:00<br />
#PBS -o /global/home/users/mayur/Logs<br />
#PBS -e /global/home/users/mayur/Errors<br />
cd /global/home/users/mayur/HMC_reducedflip/<br />
module load matlab<br />
echo "Epsilon = $Epsilon"<br />
echo "Leap Size = $LeapSize"<br />
echo "Beta = $Beta"<br />
matlab -nodisplay -nojvm -r "make_figures_fneval_cluster $LeapSize $Epsilon $Beta"<br />
<br />
Now run ./iterate.sh<br />
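<br />
Under SLURM, the same kind of parameter sweep can be expressed as a single job array rather than a submission loop. A minimal sketch (the grid values, walltime, and decoding scheme are illustrative, not taken from the scripts above):<br />
<br />
```shell
#!/bin/bash -l
#SBATCH -p cortex
#SBATCH --time=01:00:00
#SBATCH --array=0-8
# Decode a 3x3 (LeapSize x Epsilon) grid from the array task id.
# Outside SLURM the id defaults to 0, so the sketch runs standalone.
id=${SLURM_ARRAY_TASK_ID:-0}
leap=$((14 + id / 3))
case $((id % 3)) in
  0) eps=0.8 ;;
  1) eps=0.9 ;;
  2) eps=1.0 ;;
esac
echo "LeapSize=$leap Epsilon=$eps"
```
<br />
Submitted once with sbatch, SLURM then runs nine tasks, each seeing its own SLURM_ARRAY_TASK_ID.<br />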
<br />
== Mounting Cluster File System ==<br />
Mounting the cluster file system remotely allows you to easily access files on the cluster, and allows you to use local programs to edit code or examine simulation outputs locally (very useful). I often edit the remote code using a text editor running on my local machine. This allows you to take advantage of the niceties of a native editor without having to copy code back and forth before you run a simulation on the cluster.<br />
<br />
On linux distributions you can mount your cluster home directory locally using sshfs [http://fuse.sourceforge.net/sshfs.html]<br />
<br />
sshfs hadley.berkeley.edu: <mount-dir><br />
<br />
On Mac and Windows machines the program ExpanDrive works well (uses Fuse under the hood): [http://www.expandrive.com]<br />
<br />
= Support Requests =<br />
<br />
* If you have a problem that is not covered on this page, you can send an email to our user list:<br />
<br />
[mailto:redwood_cluster@lists.berkeley.edu redwood_cluster@lists.berkeley.edu]<br />
<br />
* If you need additional help from the LBL group, send an email to their list below; please always cc our email list as well. You can also visit their [https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/ website].<br />
<br />
[mailto:hpcshelp@lbl.gov hpcshelp@lbl.gov]<br />
<br />
* In urgent cases, you can also email [mailto:kmuriki@lbl.gov Krishna Muriki] (LBL User Services) directly.</div>Giselyhttps://rctn.org/w/index.php?title=Cluster-Software&diff=8313Cluster-Software2015-09-11T23:00:35Z<p>Gisely: /* Matlab */</p>
<hr />
<div>= Software =<br />
<br />
== Matlab ==<br />
<br />
In order to use matlab, you have to load the matlab environment:<br />
<br />
module load matlab/R2013a<br />
<br />
Once the matlab environment is loaded, you can start a matlab session by running<br />
<br />
matlab -nodesktop<br />
<br />
An example SLURM script for running matlab code is<br />
<br />
#!/bin/bash -l<br />
#SBATCH -p cortex<br />
#SBATCH --time=03:30:00<br />
#SBATCH --mem-per-cpu=2G<br />
module load matlab/R2013a<br />
matlab -nodesktop -r "scriptname $variable1 $variable2"<br />
<br />
The above script runs the Matlab script scriptname, passing it the two shell variables $variable1 and $variable2.<br />
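<br />
The substitution happens in the shell, not in Matlab: the variables are expanded inside the double-quoted -r string before Matlab starts, so Matlab receives literal values. A quick illustration (myfunction and the values are hypothetical):<br />
<br />
```shell
# The shell expands $variable1/$variable2 inside the double quotes,
# producing the literal command string Matlab would receive.
variable1=3
variable2=5
cmd="myfunction $variable1 $variable2; exit"
echo "$cmd"
```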
<br />
If you would like to see who is using matlab licenses, enter<br />
<br />
lmstat<br />
<br />
== Python ==<br />
=== Anaconda Python Distribution ===<br />
<br />
The Anaconda Python 2.7 or 3.4 Distributions can be loaded through<br />
module load python/anaconda2/anaconda2<br />
or<br />
module load python/anaconda3/anaconda3<br />
respectively. This distribution has NumPy and SciPy built against the Intel MKL BLAS library (multicore BLAS). You will need to get an [https://store.continuum.io/cshop/academicanaconda academic license] from Continuum and copy it to the cluster.<br />
<br />
On the cluster<br />
cd<br />
mkdir .continuum<br />
<br />
On the machine where you downloaded the license file<br />
scp file_name username@hpc.brc.berkeley.edu:/global/home/users/username/.continuum/.<br />
<br />
=== Local Install of Anaconda Python Distribution ===<br />
If you want to manage your own python distribution, Anaconda Python is a very good choice. To get it, go to the [http://continuum.io/downloads Continuum downloads] page and select the linux distribution (penguin).<br />
Copy the download link address, and then in a terminal on the cluster run:<br />
<br />
wget paste_link_here<br />
This should download a .sh file that can be run with<br />
bash Anaconda-version_info.sh<br />
<br />
== CUDA ==<br />
<br />
CUDA is a library to use the graphics processing units (GPU) on the graphics card for general-purpose computing. We have a separate wiki page to collect information on how to do general-purpose computing on the GPU: [[GPGPU]].<br />
The --constraint option must be set to cortex_k40 or cortex_fermi in order to schedule a node with a GPU.<br />
We have installed the CUDA 6.5 driver and toolkit.<br />
<br />
In order to use CUDA, you have to load the CUDA environment:<br />
<br />
module load cuda<br />
<br />
=== Using Theano ===<br />
By default, Theano expects the default compiler to be gcc, so you'll need to unload the intel compiler.<br />
<br />
module unload intel<br />
<br />
Theano caches certain compiled libraries and these will sometimes cause errors when Theano gets updated. If you are experiencing problems with Theano, you can try clearing the cache with<br />
theano-cache clear<br />
and if you still have problems you can delete the .theano folder from your home directory.<br />
<br />
==== Using the GPU ====<br />
<br />
You must request a GPU node. The Anaconda Python distribution comes with a version of Theano that should work. If you need newer Theano features, the development version of Theano can be obtained from the [https://github.com/Theano/Theano github repository], installed locally, and added to your PYTHONPATH if you are using the preinstalled Python versions. If you have a local python install, you can install theano with<br />
python setup.py develop<br />
from the repository folder.<br />
Theano must be configured to use the GPU. General information can be found in the [http://deeplearning.net/software/theano/library/config.html Theano documentation], but a working (June 2015) version is to create a .theanorc file in your HOME directory with the contents:<br />
<br />
[global]<br />
root = /global/software/sl-6.x86_64/modules/langs/cuda/6.5/<br />
device = gpu<br />
floatX = float32<br />
force_device=True<br />
<br />
[nvcc]<br />
fastmath = True<br />
<br />
==== Using the CPU ====<br />
<br />
Theano can also run on the CPU. Any of the CPU nodes will work. You will want to have Theano build against the MKL BLAS library that comes with Anaconda and so your .theanorc might look like<br />
<br />
[global]<br />
device = cpu<br />
floatX = float32<br />
ldflags = -lmkl_rt<br />
<br />
=== Obtain GPU lock in python ===<br />
<br />
If you would like to use one of the GPU cards on node n0000 or n0001, please obtain a GPU lock to make sure the card is not in use and that no one else will be using the card. <br />
<br />
If you are using Python, you can obtain a GPU lock by running<br />
<br />
import gpu_lock<br />
gpu_lock.obtain_lock_id()<br />
<br />
The function either returns the number of the card you can use (0 or 1) or -1 if both cards are in use.<br />
<br />
=== Obtain GPU lock for Jacket in Matlab ===<br />
<br />
If you are using Matlab, you can obtain a GPU lock by running<br />
<br />
addpath('/clusterfs/cortex/software/gpu_lock');<br />
addpath('/clusterfs/cortex/software/jacket/engine');<br />
gpu_id = obtain_gpu_lock_id();<br />
gselect(gpu_id);<br />
<br />
By default, obtain_gpu_lock_id() throws an error when all gpu cards are taken. There is another option: obtain_gpu_lock_id(true) will return -1 in case there is no card available, and you can then write your own code to deal with that fact.<br />
<br />
ginfo tells you which gpu card you are using.<br />
<br />
The following lines should also be in your .bashrc<br />
<br />
## jacket stuff!<br />
module load cuda<br />
export LD_LIBRARY_PATH=/clusterfs/cortex/software/jacket/engine/lib64:$LD_LIBRARY_PATH</div>Giselyhttps://rctn.org/w/index.php?title=Cluster&diff=8312Cluster2015-09-11T22:58:39Z<p>Gisely: /* Software */</p>
<hr />
<div>= General Information =<br />
<br />
The Redwood computing cluster consists of about a dozen somewhat heterogeneous machines, some with graphics cards (GPUs). The typical use cases for the cluster are that you have jobs that run in parallel which are independent, so having several machines will complete the task faster, even though any one machine might not be faster than your own laptop. Or you have a long running job which may take a day, and you don't want to worry about having to leave your laptop on at all times and not be able to use it. Another reason is that your code leverages a communication scheme (such as MPI) to have multiple machines cooperatively work on a problem. <br />
<br />
In order for the cluster to be useful and well-utilized, it works best for everyone to submit jobs TODO (see '''qsub''' further down on this page for the details) to the queue. A job may not start right away, but will get run once its turn comes. Please do not run extended interactive sessions or ssh directly to worker nodes for performing computation.<br />
<br />
[[ClusterAdmin]] has information about cluster administration.<br />
<br />
== Hardware Overview == <br />
<br />
The current hardware and node configuration is listed [https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/ucb-supercluster/cortex here].<br />
<br />
In addition to the compute nodes we own a file server TODO<br />
NetOp 4TB<br />
which is mounted as scratch space.<br />
<br />
== Getting an account and one-time password service == <br />
In order to get an account on the cluster, please send an email to Bruno (baolshausen AT berk...edu) with the following information:<br />
<br />
Full Name <emailaddress> desiredusername<br />
<br />
Please also include a note about which PI you are working with. Note: the '''desireusername''' must be 3-8 characters long, so it would have been truncated to '''desireus''' in this case.<br />
<br />
'''OTP (One Time Password) Service'''<br />
<br />
Once you have a username, you will need to follow the instructions found [https://commons.lbl.gov/display/itfaq/OTP+%28One+Time+Password%29+Service here] to set up the Pledge application, which gives you a one-time password for logging into the cluster (see '''Installing and Configuring the OTP Token''').<br />
<br />
== Directory setup ==<br />
<br />
=== Home Directory Quota ===<br />
<br />
There is a 10GB quota limit enforced on $HOME directory<br />
(/global/home/users/username) usage. Please keep your usage below<br />
this limit. There will be NETAPP snapshots in place in this file<br />
system so we suggest you store only your source code and scripts<br />
in this area and store all your data under /clusterfs/cortex<br />
(see below).<br />
<br />
In order to see your current quota and usage, use the following command: TODO<br />
<br />
quota -s<br />
<br />
=== Data ===<br />
<br />
For large amounts of data, please create a directory<br />
<br />
/clusterfs/cortex/scratch/username<br />
<br />
and store the data inside that directory. Note that unlike the home directory, scratch space is not backed up and permanence of your data is not guaranteed. There is a total limit of 4 TB for this drive that is shared by everyone at the Redwood center.<br />
<br />
== Connect ==<br />
<br />
==== Pledge App (get a password) ====<br />
<br />
* Run the pledge app and click "Generate one-time password"<br />
* Enter your PIN and press "Enter"<br />
* The application will present your 7 digit one time password<br />
<br />
=== ssh to a login node ===<br />
<br />
ssh -Y username@hpc.brc.berkeley.edu<br />
<br />
and use your one-time password.<br />
<br />
If you intend on working with a remote GUI session you can add a -C flag to the command above to enable compression data to be sent through the ssh tunnel.<br />
<br />
''' note: please don't use the login nodes for computations (e.g. matlab, python)! '''<br />
<br />
=== Setup environment ===<br />
<br />
* put all your customizations into your .bashrc <br />
* for login shells, .bash_profile is used, which in turn loads .bashrc<br />
<br />
=== Using a Windows machine ===<br />
Windows is not a Unix-based operating system and as a result does not natively interface with a Unix environment. Download the 2 following pieces of software to create a workaround:<br />
* Install a Unix environment emulator to interface directly with the cluster. Cygwin [http://www.cygwin.com] seems to work well. During installation make sure to install Net -> "openssh". Editors -> "vim" is also recommended. Then you can use the instructions detailed in ssh to gateway above<br />
* Install an SFTP/SCP/FTP client to allow for file sharing between the cluster and your local machine. WinSCP [http://www.winscp.net] is recommended. ExpanDrive can also be used to create a cluster-based network drive on your local machine.<br />
<br />
== Useful commands ==<br />
<br />
See https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/scheduler/ucb-supercluster-slurm-migration for a detailed FAQ on the SLURM job manager. <br />
<br />
Full description of our system by the LBL folks is at http://go.lbl.gov/hpcs-user-svcs/ucb-supercluster/cortex<br />
<br />
=== SLURM usage ===<br />
<br />
* Submitting a Job<br />
<br />
From the login node, you can submit jobs to the compute nodes using the syntax<br />
<br />
sbatch myscript.sh<br />
<br />
where the myscript.sh is an shell script containing the call to the executable to be submitted to the cluster. Typically, for a matlab job, it would look like<br />
<br />
#!/bin/bash -l<br />
#SBATCH -p cortex<br />
#SBATCH --time=03:30:00<br />
#SBATCH --mem-per-cpu=2G<br />
cd /clusterfs/cortex/scratch/working/dir/for/your/code<br />
module load matlab/R2013a<br />
matlab -nodisplay -nojvm -r "mymatlabfunction( parameters); exit"<br />
exit<br />
<br />
the --time defines the walltime of the job, which is an upper bound on the estimated runtime. The job will be killed after this time is elapsed. --mem specifies how much memory the job requires, the default is 1GB per job. <br />
<br />
* Monitoring Jobs <br />
<br />
Additional options can be passed to sbatch to monitor outputs from the running jobs<br />
<br />
sbatch -o outputfile.txt -e errofile.txt -J jobdescriptor myscript.sh<br />
<br />
the output of the job will be piped to outputfile.txt and any errors if the job crashes to errofile.txt<br />
<br />
* Cluster usage<br />
<br />
Use<br />
squeue<br />
to get a list of pending and running jobs on the cluster. It will show user names jobdescriptor passed to sbatch, runtime and nodes.<br />
<br />
=== Perceus commands ===<br />
<br />
The perceus manual is [http://www.warewulf-cluster.org/portal/book/export/html/7 here]<br />
<br />
* listing available cluster nodes:<br />
<br />
wwstats<br />
wwnodes<br />
<br />
* list cluster usage<br />
<br />
wwtop<br />
<br />
* to restrict the scope of these commands to cortex cluster, add the following line to your .bashrc<br />
<br />
export NODES='*cortex'<br />
<br />
* module list<br />
* module avail<br />
* module help<br />
<br />
* help pages are [http://lrc.lbl.gov/html/guide.html here]<br />
<br />
=== Finding out the list of occupants on each cluster node ===<br />
<br />
* One can find out the list of users using a particular node by ssh into the node, e.g.<br />
<br />
ssh n0000.cortex<br />
<br />
* After logging into the node, type<br />
<br />
top<br />
<br />
* This is useful if you believe someone is abusing the machine and would like to send him/her a friendly reminder.<br />
<br />
= Software =<br />
For information on what software is installed on the cluster and how to access it, head [http://redwood.berkeley.edu/wiki/Cluster-Software here]<br />
<br />
= Usage Tips TODO =<br />
Here are some tips on how to effectively use the cluster.<br />
<br />
== Embarrassingly Parallel Submissions ==<br />
<br />
Here is an alternate script to do embarrassingly parallel submissions on the cluster.<br />
<br />
iterate.sh<br />
#!/bin/sh<br />
#Leap Size<br />
param1=11<br />
param2=1.2<br />
param3=.75<br />
#LeapSize<br />
for i in 14 15 16<br />
do<br />
#Epsilon<br />
for j in $(seq .8 .1 $param2);<br />
do<br />
#Beta<br />
for k in $(seq .65 .01 $param3);<br />
do<br />
echo $i,$j,$k<br />
qsub param_test.sh -v "LeapSize=$i,Epsilon=$j,Beta=$k"<br />
done<br />
done<br />
done<br />
<br />
param_test.sh<br />
#!/bin/bash<br />
#PBS -q cortex<br />
#PBS -l nodes=1:ppn=2:gpu<br />
#PBS -l walltime=10:35:00<br />
#PBS -o /global/home/users/mayur/Logs<br />
#PBS -e /global/home/users/mayur/Errors<br />
cd /global/home/users/mayur/HMC_reducedflip/<br />
module load matlab<br />
echo "Epsilon = ",$Epsilon<br />
echo "Leap Size = ",$LeapSize<br />
echo "Beta = ",$Beta<br />
matlab -nodisplay -nojvm -r "make_figures_fneval_cluster $LeapSize $Epsilon $Beta"<br />
<br />
Now run ./iterate.sh<br />
<br />
== Mounting Cluster File System ==<br />
Mounting the cluster file system remotely allows you to easily access files on the cluster, and allows you to use local programs to edit code or examine simulation outputs locally (very useful). I often edit the remote code using a text editor running on my local machine. This allows you to take advantage of the niceties of a native editor without having to copy code back and forth before you run a simulation on the cluster.<br />
<br />
On linux distributions you can mount your cluster home directory locally using sshfs [http://fuse.sourceforge.net/sshfs.html]<br />
<br />
sshfs hadley.berkeley.edu: <mount-dir><br />
<br />
On Mac and Windows machines the program ExpanDrive works well (uses Fuse under the hood): [http://www.expandrive.com]<br />
<br />
= Support Requests =<br />
<br />
* If you have a problem that is not covered on this page, you can send an email to our user list:<br />
<br />
[mailto:redwood_cluster@lists.berkeley.edu redwood_cluster@lists.berkeley.edu]<br />
<br />
* If you need additional help from the LBL group, send an email to their email list. Please always cc our email list as well. Or visit their website[https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/].<br />
<br />
[mailto:hpcshelp@lbl.gov hpcshelp@lbl.gov]<br />
<br />
* In urgent cases, you can also email [mailto:kmuriki@lbl.gov Krishna Muriki] (LBL User Services) directly.</div>Giselyhttps://rctn.org/w/index.php?title=Cluster&diff=8311Cluster2015-09-11T22:57:46Z<p>Gisely: /* Software */</p>
<hr />
<div>= General Information =<br />
<br />
The Redwood computing cluster consists of about a dozen somewhat heterogeneous machines, some with graphics cards (GPUs). The typical use cases for the cluster are that you have jobs that run in parallel which are independent, so having several machines will complete the task faster, even though any one machine might not be faster than your own laptop. Or you have a long running job which may take a day, and you don't want to worry about having to leave your laptop on at all times and not be able to use it. Another reason is that your code leverages a communication scheme (such as MPI) to have multiple machines cooperatively work on a problem. <br />
<br />
In order for the cluster to be useful and well-utilized, it works best for everyone to submit jobs TODO (see '''qsub''' further down on this page for the details) to the queue. A job may not start right away, but will get run once its turn comes. Please do not run extended interactive sessions or ssh directly to worker nodes for performing computation.<br />
<br />
[[ClusterAdmin]] has information about cluster administration.<br />
<br />
== Hardware Overview == <br />
<br />
The current hardware and node configuration is listed [https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/ucb-supercluster/cortex here].<br />
<br />
In addition to the compute nodes we own a file server TODO<br />
NetOp 4TB<br />
which is mounted as scratch space.<br />
<br />
== Getting an account and one-time password service == <br />
In order to get an account on the cluster, please send an email to Bruno (baolshausen AT berk...edu) with the following information:<br />
<br />
Full Name <emailaddress> desiredusername<br />
<br />
Please also include a note about which PI you are working with. Note: the '''desireusername''' must be 3-8 characters long, so it would have been truncated to '''desireus''' in this case.<br />
<br />
'''OTP (One Time Password) Service'''<br />
<br />
Once you have a username, you will need to follow the instructions found [https://commons.lbl.gov/display/itfaq/OTP+%28One+Time+Password%29+Service here] to set up the Pledge application, which gives you a one-time password for logging into the cluster (see '''Installing and Configuring the OTP Token''').<br />
<br />
== Directory setup ==<br />
<br />
=== Home Directory Quota ===<br />
<br />
There is a 10GB quota limit enforced on $HOME directory<br />
(/global/home/users/username) usage. Please keep your usage below<br />
this limit. There will be NETAPP snapshots in place in this file<br />
system so we suggest you store only your source code and scripts<br />
in this area and store all your data under /clusterfs/cortex<br />
(see below).<br />
<br />
In order to see your current quota and usage, use the following command: TODO<br />
<br />
quota -s<br />
<br />
=== Data ===<br />
<br />
For large amounts of data, please create a directory<br />
<br />
/clusterfs/cortex/scratch/username<br />
<br />
and store the data inside that directory. Note that unlike the home directory, scratch space is not backed up and permanence of your data is not guaranteed. There is a total limit of 4 TB for this drive that is shared by everyone at the Redwood center.<br />
<br />
== Connect ==<br />
<br />
==== Pledge App (get a password) ====<br />
<br />
* Run the pledge app and click "Generate one-time password"<br />
* Enter your PIN and press "Enter"<br />
* The application will present your 7 digit one time password<br />
<br />
=== ssh to a login node ===<br />
<br />
ssh -Y username@hpc.brc.berkeley.edu<br />
<br />
and use your one-time password.<br />
<br />
If you intend on working with a remote GUI session you can add a -C flag to the command above to enable compression data to be sent through the ssh tunnel.<br />
<br />
''' note: please don't use the login nodes for computations (e.g. matlab, python)! '''<br />
<br />
=== Setup environment ===<br />
<br />
* put all your customizations into your .bashrc <br />
* for login shells, .bash_profile is used, which in turn loads .bashrc<br />
<br />
=== Using a Windows machine ===<br />
Windows is not a Unix-based operating system and as a result does not natively interface with a Unix environment. Download the 2 following pieces of software to create a workaround:<br />
* Install a Unix environment emulator to interface directly with the cluster. Cygwin [http://www.cygwin.com] seems to work well. During installation make sure to install Net -> "openssh". Editors -> "vim" is also recommended. Then you can use the instructions detailed in ssh to gateway above<br />
* Install an SFTP/SCP/FTP client to allow for file sharing between the cluster and your local machine. WinSCP [http://www.winscp.net] is recommended. ExpanDrive can also be used to create a cluster-based network drive on your local machine.<br />
<br />
== Useful commands ==<br />
<br />
See https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/scheduler/ucb-supercluster-slurm-migration for a detailed FAQ on the SLURM job manager. <br />
<br />
Full description of our system by the LBL folks is at http://go.lbl.gov/hpcs-user-svcs/ucb-supercluster/cortex<br />
<br />
=== SLURM usage ===<br />
<br />
* Submitting a Job<br />
<br />
From the login node, you can submit jobs to the compute nodes using the syntax<br />
<br />
sbatch myscript.sh<br />
<br />
where the myscript.sh is an shell script containing the call to the executable to be submitted to the cluster. Typically, for a matlab job, it would look like<br />
<br />
#!/bin/bash -l<br />
#SBATCH -p cortex<br />
#SBATCH --time=03:30:00<br />
#SBATCH --mem-per-cpu=2G<br />
cd /clusterfs/cortex/scratch/working/dir/for/your/code<br />
module load matlab/R2013a<br />
matlab -nodisplay -nojvm -r "mymatlabfunction( parameters); exit"<br />
exit<br />
<br />
the --time defines the walltime of the job, which is an upper bound on the estimated runtime. The job will be killed after this time is elapsed. --mem specifies how much memory the job requires, the default is 1GB per job. <br />
<br />
* Monitoring Jobs <br />
<br />
Additional options can be passed to sbatch to monitor outputs from the running jobs<br />
<br />
sbatch -o outputfile.txt -e errofile.txt -J jobdescriptor myscript.sh<br />
<br />
the output of the job will be piped to outputfile.txt and any errors if the job crashes to errofile.txt<br />
<br />
* Cluster usage<br />
<br />
Use<br />
squeue<br />
to get a list of pending and running jobs on the cluster. It will show user names jobdescriptor passed to sbatch, runtime and nodes.<br />
<br />
=== Perceus commands ===<br />
<br />
The perceus manual is [http://www.warewulf-cluster.org/portal/book/export/html/7 here]<br />
<br />
* listing available cluster nodes:<br />
<br />
wwstats<br />
wwnodes<br />
<br />
* list cluster usage<br />
<br />
wwtop<br />
<br />
* to restrict the scope of these commands to cortex cluster, add the following line to your .bashrc<br />
<br />
export NODES='*cortex'<br />
<br />
* module list<br />
* module avail<br />
* module help<br />
<br />
* help pages are [http://lrc.lbl.gov/html/guide.html here]<br />
<br />
=== Finding out the list of occupants on each cluster node ===<br />
<br />
* One can find out the list of users on a particular node by ssh-ing into the node, e.g.<br />
<br />
ssh n0000.cortex<br />
<br />
* After logging into the node, type<br />
<br />
top<br />
<br />
* This is useful if you believe someone is abusing the machine and would like to send him/her a friendly reminder.<br />
<br />
= Software =<br />
For information on what software is installed on the cluster and how to access it, head [http://redwood.berkeley.edu/wiki/Cluster-Software here]<br />
<br />
= Usage Tips =<br />
Here are some tips on how to effectively use the cluster.<br />
<br />
== Embarrassingly Parallel Submissions ==<br />
<br />
Here is an example script pair for doing embarrassingly parallel submissions on the cluster.<br />
<br />
iterate.sh<br />
#!/bin/sh<br />
#Leap Size<br />
param1=11<br />
param2=1.2<br />
param3=.75<br />
#LeapSize<br />
for i in 14 15 16<br />
do<br />
#Epsilon<br />
for j in $(seq .8 .1 $param2);<br />
do<br />
#Beta<br />
for k in $(seq .65 .01 $param3);<br />
do<br />
echo $i,$j,$k<br />
qsub -v "LeapSize=$i,Epsilon=$j,Beta=$k" param_test.sh<br />
done<br />
done<br />
done<br />
<br />
param_test.sh<br />
#!/bin/bash<br />
#PBS -q cortex<br />
#PBS -l nodes=1:ppn=2:gpu<br />
#PBS -l walltime=10:35:00<br />
#PBS -o /global/home/users/mayur/Logs<br />
#PBS -e /global/home/users/mayur/Errors<br />
cd /global/home/users/mayur/HMC_reducedflip/<br />
module load matlab<br />
echo "Epsilon = ",$Epsilon<br />
echo "Leap Size = ",$LeapSize<br />
echo "Beta = ",$Beta<br />
matlab -nodisplay -nojvm -r "make_figures_fneval_cluster $LeapSize $Epsilon $Beta"<br />
<br />
Now run ./iterate.sh<br />
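Note that iterate.sh uses the older PBS qsub syntax. Since the cluster migrated to SLURM, the same sweep could be driven with sbatch --export instead. The sketch below is a dry run: it writes the sbatch commands it would run to a file for inspection rather than submitting them, and it assumes param_test.sh has been converted to a SLURM script.<br />

```shell
#!/bin/bash
# Dry-run SLURM version of the parameter sweep above. Each generated
# sbatch command is written to sweep_commands.txt instead of being run.
for LeapSize in 14 15 16; do
    for Epsilon in $(seq .8 .1 1.2); do
        for Beta in $(seq .65 .01 .75); do
            # Dry run: the command is printed, not executed.
            echo "sbatch --export=LeapSize=$LeapSize,Epsilon=$Epsilon,Beta=$Beta param_test.sh"
        done
    done
done > sweep_commands.txt

# Count the generated commands (3 LeapSize x 5 Epsilon x 11 Beta values).
wc -l < sweep_commands.txt
```

Once the list looks right, the jobs can be submitted with bash sweep_commands.txt.<br />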
<br />
== Mounting Cluster File System ==<br />
Mounting the cluster file system remotely lets you access files on the cluster with programs on your local machine, e.g. to edit code or examine simulation outputs (very useful). I often edit remote code using a text editor running on my local machine; this gives you the niceties of a native editor without having to copy code back and forth before running a simulation on the cluster.<br />
<br />
On Linux distributions you can mount your cluster home directory locally using sshfs [http://fuse.sourceforge.net/sshfs.html]<br />
<br />
sshfs hadley.berkeley.edu: <mount-dir><br />
<br />
On Mac and Windows machines the program ExpanDrive works well (uses Fuse under the hood): [http://www.expandrive.com]<br />
<br />
= Support Requests =<br />
<br />
* If you have a problem that is not covered on this page, you can send an email to our user list:<br />
<br />
[mailto:redwood_cluster@lists.berkeley.edu redwood_cluster@lists.berkeley.edu]<br />
<br />
* If you need additional help from the LBL group, send an email to their email list (please always cc our email list as well), or visit their [https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/ website].<br />
<br />
[mailto:hpcshelp@lbl.gov hpcshelp@lbl.gov]<br />
<br />
* In urgent cases, you can also email [mailto:kmuriki@lbl.gov Krishna Muriki] (LBL User Services) directly.</div>Giselyhttps://rctn.org/w/index.php?title=Cluster&diff=8310Cluster2015-09-11T22:57:28Z<p>Gisely: /* Usage Tips TODO */</p>
<hr />
<div>= General Information =<br />
<br />
The Redwood computing cluster consists of about a dozen somewhat heterogeneous machines, some with graphics cards (GPUs). The typical use cases: you have independent jobs that can run in parallel, so several machines complete the task faster even though no single machine may be faster than your own laptop; or you have a long-running job that may take a day, and you don't want to have to leave your laptop on at all times and be unable to use it; or your code leverages a communication scheme (such as MPI) to have multiple machines cooperatively work on a problem. <br />
<br />
In order for the cluster to be useful and well-utilized, it works best for everyone to submit jobs to the queue (see '''sbatch''' further down on this page for the details). A job may not start right away, but will get run once its turn comes. Please do not run extended interactive sessions or ssh directly to worker nodes for performing computation.<br />
<br />
[[ClusterAdmin]] has information about cluster administration.<br />
<br />
== Hardware Overview == <br />
<br />
The current hardware and node configuration is listed [https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/ucb-supercluster/cortex here].<br />
<br />
In addition to the compute nodes, we own a 4 TB NetApp file server, which is mounted as scratch space.<br />
<br />
== Getting an account and one-time password service == <br />
In order to get an account on the cluster, please send an email to Bruno (baolshausen AT berk...edu) with the following information:<br />
<br />
Full Name <emailaddress> desiredusername<br />
<br />
Please also include a note about which PI you are working with. Note: the '''desiredusername''' must be 3-8 characters long, so it would have been truncated to '''desiredu''' in this case.<br />
<br />
'''OTP (One Time Password) Service'''<br />
<br />
Once you have a username, you will need to follow the instructions found [https://commons.lbl.gov/display/itfaq/OTP+%28One+Time+Password%29+Service here] to set up the Pledge application, which gives you a one-time password for logging into the cluster (see '''Installing and Configuring the OTP Token''').<br />
<br />
== Directory setup ==<br />
<br />
=== Home Directory Quota ===<br />
<br />
There is a 10GB quota limit enforced on $HOME directory<br />
(/global/home/users/username) usage. Please keep your usage below<br />
this limit. There will be NETAPP snapshots in place in this file<br />
system so we suggest you store only your source code and scripts<br />
in this area and store all your data under /clusterfs/cortex<br />
(see below).<br />
<br />
In order to see your current quota and usage, use the following command:<br />
<br />
quota -s<br />
<br />
=== Data ===<br />
<br />
For large amounts of data, please create a directory<br />
<br />
/clusterfs/cortex/scratch/username<br />
<br />
and store the data inside that directory. Note that unlike the home directory, scratch space is not backed up and permanence of your data is not guaranteed. There is a total limit of 4 TB for this drive that is shared by everyone at the Redwood center.<br />
<br />
== Connect ==<br />
<br />
==== Pledge App (get a password) ====<br />
<br />
* Run the pledge app and click "Generate one-time password"<br />
* Enter your PIN and press "Enter"<br />
* The application will present your 7 digit one time password<br />
<br />
=== ssh to a login node ===<br />
<br />
ssh -Y username@hpc.brc.berkeley.edu<br />
<br />
and use your one-time password.<br />
<br />
If you intend to work with a remote GUI session, you can add the -C flag to the command above to compress data sent through the ssh tunnel.<br />
<br />
''' note: please don't use the login nodes for computations (e.g. matlab, python)! '''<br />
<br />
=== Setup environment ===<br />
<br />
* put all your customizations into your .bashrc <br />
* for login shells, .bash_profile is used, which in turn loads .bashrc<br />
<br />
=== Using a Windows machine ===<br />
Windows is not a Unix-based operating system and as a result does not natively interface with a Unix environment. Download the following two pieces of software to create a workaround:<br />
* Install a Unix environment emulator to interface directly with the cluster. Cygwin [http://www.cygwin.com] seems to work well. During installation make sure to install Net -> "openssh". Editors -> "vim" is also recommended. Then you can follow the instructions in ''ssh to a login node'' above.<br />
* Install an SFTP/SCP/FTP client to allow for file sharing between the cluster and your local machine. WinSCP [http://www.winscp.net] is recommended. ExpanDrive can also be used to create a cluster-based network drive on your local machine.<br />
<br />
== Useful commands ==<br />
<br />
See https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/scheduler/ucb-supercluster-slurm-migration for a detailed FAQ on the SLURM job manager. <br />
<br />
Full description of our system by the LBL folks is at http://go.lbl.gov/hpcs-user-svcs/ucb-supercluster/cortex<br />
<br />
=== SLURM usage ===<br />
<br />
* Submitting a Job<br />
<br />
From the login node, you can submit jobs to the compute nodes using the syntax<br />
<br />
sbatch myscript.sh<br />
<br />
where myscript.sh is a shell script containing the call to the executable to be submitted to the cluster. For a typical Matlab job, it would look like<br />
<br />
#!/bin/bash -l<br />
#SBATCH -p cortex<br />
#SBATCH --time=03:30:00<br />
#SBATCH --mem-per-cpu=2G<br />
cd /clusterfs/cortex/scratch/working/dir/for/your/code<br />
module load matlab/R2013a<br />
matlab -nodisplay -nojvm -r "mymatlabfunction( parameters); exit"<br />
exit<br />
<br />
The --time option defines the walltime of the job, an upper bound on the estimated runtime; the job will be killed once this time has elapsed. --mem-per-cpu specifies how much memory the job requires per CPU; the default is 1 GB. <br />
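The job-script fields above can also be filled in programmatically. Below is a small bash sketch of a helper that writes a job script like the one above; the helper name, paths, and Matlab command are hypothetical, not part of the cluster tooling.<br />

```shell
#!/bin/bash
# Hypothetical helper: write a SLURM job script like the example above.
# The function name, paths and Matlab command are illustrative only.
make_job() {
    local script="$1" walltime="$2" mem="$3" workdir="$4" cmd="$5"
    cat > "$script" <<EOF
#!/bin/bash -l
#SBATCH -p cortex
#SBATCH --time=$walltime
#SBATCH --mem-per-cpu=$mem
cd $workdir
module load matlab/R2013a
matlab -nodisplay -nojvm -r "$cmd; exit"
exit
EOF
}

# Write myjob.sh with a 3.5 h walltime and 2 GB of memory per CPU.
make_job myjob.sh 03:30:00 2G /clusterfs/cortex/scratch/me "myfunction(1)"
```

The generated myjob.sh can then be submitted with sbatch myjob.sh as described above.<br />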
<br />
* Monitoring Jobs <br />
<br />
Additional options can be passed to sbatch to capture output from running jobs:<br />
<br />
sbatch -o outputfile.txt -e errorfile.txt -J jobdescriptor myscript.sh<br />
<br />
The standard output of the job will be written to outputfile.txt, and any errors (e.g. if the job crashes) to errorfile.txt.<br />
<br />
* Cluster usage<br />
<br />
Use<br />
squeue<br />
to get a list of pending and running jobs on the cluster. It shows the user name, the job descriptor passed to sbatch, the runtime, and the nodes in use.<br />
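Standard text tools work well for filtering this output. The sketch below runs awk over squeue-style sample text (the sample lines are made up for illustration, not captured from the cluster) to list only jobs that are currently running (state R):<br />

```shell
# Made-up squeue-style output; the real column layout may differ slightly.
squeue_sample='JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
101 cortex sweep1 alice R 1:02:03 1 n0000.cortex
102 cortex sweep2 alice PD 0:00 1 (Resources)
103 cortex train bob R 12:40:11 1 n0003.cortex'

# Skip the header, keep running jobs (ST == "R"), print id, user, runtime.
echo "$squeue_sample" | awk 'NR > 1 && $5 == "R" {print $1, $4, $6}'
```

With real output, pipe squeue itself into the same awk filter.<br />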
<br />
=== Perceus commands ===<br />
<br />
The perceus manual is [http://www.warewulf-cluster.org/portal/book/export/html/7 here]<br />
<br />
* listing available cluster nodes:<br />
<br />
wwstats<br />
wwnodes<br />
<br />
* list cluster usage<br />
<br />
wwtop<br />
<br />
* to restrict the scope of these commands to cortex cluster, add the following line to your .bashrc<br />
<br />
export NODES='*cortex'<br />
<br />
* module list<br />
* module avail<br />
* module help<br />
<br />
* help pages are [http://lrc.lbl.gov/html/guide.html here]<br />
<br />
=== Finding out the list of occupants on each cluster node ===<br />
<br />
* One can find out the list of users on a particular node by ssh-ing into the node, e.g.<br />
<br />
ssh n0000.cortex<br />
<br />
* After logging into the node, type<br />
<br />
top<br />
<br />
* This is useful if you believe someone is abusing the machine and would like to send him/her a friendly reminder.<br />
<br />
= Software =<br />
For information on what software is installed on the cluster and how to access it, head [http://redwood.berkeley.edu/wiki/Cluster-Software here]<br />
<br />
= Usage Tips =<br />
Here are some tips on how to effectively use the cluster.<br />
<br />
== Embarrassingly Parallel Submissions ==<br />
<br />
Here is an example script pair for doing embarrassingly parallel submissions on the cluster.<br />
<br />
iterate.sh<br />
#!/bin/sh<br />
#Leap Size<br />
param1=11<br />
param2=1.2<br />
param3=.75<br />
#LeapSize<br />
for i in 14 15 16<br />
do<br />
#Epsilon<br />
for j in $(seq .8 .1 $param2);<br />
do<br />
#Beta<br />
for k in $(seq .65 .01 $param3);<br />
do<br />
echo $i,$j,$k<br />
qsub -v "LeapSize=$i,Epsilon=$j,Beta=$k" param_test.sh<br />
done<br />
done<br />
done<br />
<br />
param_test.sh<br />
#!/bin/bash<br />
#PBS -q cortex<br />
#PBS -l nodes=1:ppn=2:gpu<br />
#PBS -l walltime=10:35:00<br />
#PBS -o /global/home/users/mayur/Logs<br />
#PBS -e /global/home/users/mayur/Errors<br />
cd /global/home/users/mayur/HMC_reducedflip/<br />
module load matlab<br />
echo "Epsilon = ",$Epsilon<br />
echo "Leap Size = ",$LeapSize<br />
echo "Beta = ",$Beta<br />
matlab -nodisplay -nojvm -r "make_figures_fneval_cluster $LeapSize $Epsilon $Beta"<br />
<br />
Now run ./iterate.sh<br />
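Note that iterate.sh uses the older PBS qsub syntax. Since the cluster migrated to SLURM, the same sweep could be driven with sbatch --export instead. The sketch below is a dry run: it writes the sbatch commands it would run to a file for inspection rather than submitting them, and it assumes param_test.sh has been converted to a SLURM script.<br />

```shell
#!/bin/bash
# Dry-run SLURM version of the parameter sweep above. Each generated
# sbatch command is written to sweep_commands.txt instead of being run.
for LeapSize in 14 15 16; do
    for Epsilon in $(seq .8 .1 1.2); do
        for Beta in $(seq .65 .01 .75); do
            # Dry run: the command is printed, not executed.
            echo "sbatch --export=LeapSize=$LeapSize,Epsilon=$Epsilon,Beta=$Beta param_test.sh"
        done
    done
done > sweep_commands.txt

# Count the generated commands (3 LeapSize x 5 Epsilon x 11 Beta values).
wc -l < sweep_commands.txt
```

Once the list looks right, the jobs can be submitted with bash sweep_commands.txt.<br />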
<br />
== Mounting Cluster File System ==<br />
Mounting the cluster file system remotely lets you access files on the cluster with programs on your local machine, e.g. to edit code or examine simulation outputs (very useful). I often edit remote code using a text editor running on my local machine; this gives you the niceties of a native editor without having to copy code back and forth before running a simulation on the cluster.<br />
<br />
On Linux distributions you can mount your cluster home directory locally using sshfs [http://fuse.sourceforge.net/sshfs.html]<br />
<br />
sshfs hadley.berkeley.edu: <mount-dir><br />
<br />
On Mac and Windows machines the program ExpanDrive works well (uses Fuse under the hood): [http://www.expandrive.com]<br />
<br />
= Support Requests =<br />
<br />
* If you have a problem that is not covered on this page, you can send an email to our user list:<br />
<br />
[mailto:redwood_cluster@lists.berkeley.edu redwood_cluster@lists.berkeley.edu]<br />
<br />
* If you need additional help from the LBL group, send an email to their email list (please always cc our email list as well), or visit their [https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/ website].<br />
<br />
[mailto:hpcshelp@lbl.gov hpcshelp@lbl.gov]<br />
<br />
* In urgent cases, you can also email [mailto:kmuriki@lbl.gov Krishna Muriki] (LBL User Services) directly.</div>Giselyhttps://rctn.org/w/index.php?title=Cluster-Software&diff=8309Cluster-Software2015-09-11T22:53:57Z<p>Gisely: Created page with "= Software = == Matlab == Start an interactive session on the cluster (requires specifying the cluster and walltime as is shown here): srun -u -p cortex -t 2:0:0 --pty ba..."</p>
<hr />
<div>= Software =<br />
<br />
== Matlab ==<br />
<br />
Start an interactive session on the cluster (this requires specifying the partition and walltime, as shown here):<br />
<br />
srun -u -p cortex -t 2:0:0 --pty bash -i<br />
<br />
In order to use matlab, you have to load the matlab environment:<br />
<br />
module load matlab/R2013a<br />
<br />
Once the matlab environment is loaded, you can start a matlab session by running<br />
<br />
matlab -nodesktop<br />
<br />
An example SLURM script for running matlab code is<br />
<br />
#!/bin/bash -l<br />
#SBATCH -p cortex<br />
#SBATCH --time=03:30:00<br />
#SBATCH --mem-per-cpu=2G<br />
module load matlab/R2013a<br />
matlab -nodesktop -r "scriptname $variable1 $variable2"<br />
<br />
The above script runs the Matlab script ''scriptname'', passing it the two variables $variable1 and $variable2.<br />
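Because the -r argument is in double quotes, the shell expands $variable1 and $variable2 before Matlab starts, so Matlab receives plain code with the values already substituted. The sketch below shows the expansion with echo standing in for the real matlab binary (scriptname and the values are placeholders):<br />

```shell
variable1=42
variable2=7
# The shell expands the variables inside the double quotes first, so the
# string Matlab would receive is the plain command: scriptname 42 7
cmd="scriptname $variable1 $variable2"
echo matlab -nodesktop -r "$cmd"
```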
<br />
If you would like to see who is using matlab licenses, enter<br />
<br />
lmstat<br />
<br />
== Python ==<br />
=== Anaconda Python Distribution ===<br />
<br />
The Anaconda Python 2.7 or 3.4 Distributions can be loaded through<br />
module load python/anaconda2/anaconda2<br />
or<br />
module load python/anaconda3/anaconda3<br />
respectively. This distribution has NumPy and SciPy built against the Intel MKL BLAS library (multicore BLAS). You will need to get an [https://store.continuum.io/cshop/academicanaconda academic license] from Continuum and copy it to the cluster.<br />
<br />
On the cluster<br />
cd<br />
mkdir .continuum<br />
<br />
On the machine where you downloaded the license file<br />
scp file_name username@hpc.brc.berkeley.edu:/global/home/users/username/.continuum/.<br />
<br />
=== Local Install of Anaconda Python Distribution ===<br />
If you want to manage your own Python distribution, Anaconda is a very good choice. To get it, go to the [http://continuum.io/downloads Continuum downloads] page and select the Linux distribution (penguin).<br />
Copy the download link address, and then in a terminal on the cluster run:<br />
<br />
wget paste_link_here<br />
This should download a .sh file that can be run with<br />
bash Anaconda-version_info.sh<br />
<br />
== CUDA ==<br />
<br />
CUDA is a library to use the graphics processing units (GPU) on the graphics card for general-purpose computing. We have a separate wiki page to collect information on how to do general-purpose computing on the GPU: [[GPGPU]].<br />
The --constraint={cortex_k40, cortex_fermi} option must be used in order to schedule a node with a GPU.<br />
We have installed the CUDA 6.5 driver and toolkit.<br />
<br />
In order to use CUDA, you have to load the CUDA environment:<br />
<br />
module load cuda<br />
<br />
=== Using Theano ===<br />
By default, Theano expects the compiler to be gcc, so you'll need to unload the Intel compiler:<br />
<br />
module unload intel<br />
<br />
Theano caches certain compiled libraries and these will sometimes cause errors when Theano gets updated. If you are experiencing problems with Theano, you can try clearing the cache with<br />
theano-cache clear<br />
and if you still have problems you can delete the .theano folder from your home directory.<br />
<br />
==== Using the GPU ====<br />
<br />
You must request a GPU node. The Anaconda Python distribution comes with a version of Theano that should work. If you need new Theano features, the development version of Theano can be obtained from the [https://github.com/Theano/Theano github repository], installed locally, and added to your PYTHONPATH if you are using the preinstalled Python versions. If you have a local python install you can install theano with<br />
python setup.py develop<br />
from the repository folder.<br />
Theano must be configured to use the GPU. General information can be found in the [http://deeplearning.net/software/theano/library/config.html Theano documentation], but a working (June 2015) version is to create a .theanorc file in your HOME directory with the contents:<br />
<br />
[global]<br />
root = /global/software/sl-6.x86_64/modules/langs/cuda/6.5/<br />
device = gpu<br />
floatX = float32<br />
force_device=True<br />
<br />
[nvcc]<br />
fastmath = True<br />
<br />
==== Using the CPU ====<br />
<br />
Theano can also run on the CPU. Any of the CPU nodes will work. You will want to have Theano build against the MKL BLAS library that comes with Anaconda and so your .theanorc might look like<br />
<br />
[global]<br />
device = cpu<br />
floatX = float32<br />
ldflags = -lmkl_rt<br />
<br />
=== Obtain GPU lock in python ===<br />
<br />
If you would like to use one of the GPU cards on node n0000 or n0001, please obtain a GPU lock to make sure the card is not already in use and to signal to others that it is taken. <br />
<br />
If you are using Python, you can obtain a GPU lock by running<br />
<br />
import gpu_lock<br />
gpu_lock.obtain_lock_id()<br />
<br />
The function either returns the number of the card you can use (0 or 1) or -1 if both cards are in use.<br />
<br />
=== Obtain GPU lock for Jacket in Matlab ===<br />
<br />
If you are using Matlab, you can obtain a GPU lock by running<br />
<br />
addpath('/clusterfs/cortex/software/gpu_lock');<br />
addpath('/clusterfs/cortex/software/jacket/engine');<br />
gpu_id = obtain_gpu_lock_id();<br />
gselect(gpu_id);<br />
<br />
By default, obtain_gpu_lock_id() throws an error when all GPU cards are taken. There is another option: obtain_gpu_lock_id(true) will return -1 in case no card is available, and you can then write your own code to deal with that case.<br />
<br />
ginfo tells you which gpu card you are using.<br />
<br />
The following lines should also be in your .bashrc<br />
<br />
## jacket stuff!<br />
module load cuda<br />
export LD_LIBRARY_PATH=/clusterfs/cortex/software/jacket/engine/lib64:$LD_LIBRARY_PATH</div>Giselyhttps://rctn.org/w/index.php?title=Cluster&diff=8308Cluster2015-09-11T22:52:08Z<p>Gisely: /* Software */</p>
<hr />
<div>= General Information =<br />
<br />
The Redwood computing cluster consists of about a dozen somewhat heterogeneous machines, some with graphics cards (GPUs). The typical use cases: you have independent jobs that can run in parallel, so several machines complete the task faster even though no single machine may be faster than your own laptop; or you have a long-running job that may take a day, and you don't want to have to leave your laptop on at all times and be unable to use it; or your code leverages a communication scheme (such as MPI) to have multiple machines cooperatively work on a problem. <br />
<br />
In order for the cluster to be useful and well-utilized, it works best for everyone to submit jobs to the queue (see '''sbatch''' further down on this page for the details). A job may not start right away, but will get run once its turn comes. Please do not run extended interactive sessions or ssh directly to worker nodes for performing computation.<br />
<br />
[[ClusterAdmin]] has information about cluster administration.<br />
<br />
== Hardware Overview == <br />
<br />
The current hardware and node configuration is listed [https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/ucb-supercluster/cortex here].<br />
<br />
In addition to the compute nodes, we own a 4 TB NetApp file server, which is mounted as scratch space.<br />
<br />
== Getting an account and one-time password service == <br />
In order to get an account on the cluster, please send an email to Bruno (baolshausen AT berk...edu) with the following information:<br />
<br />
Full Name <emailaddress> desiredusername<br />
<br />
Please also include a note about which PI you are working with. Note: the '''desiredusername''' must be 3-8 characters long, so it would have been truncated to '''desiredu''' in this case.<br />
<br />
'''OTP (One Time Password) Service'''<br />
<br />
Once you have a username, you will need to follow the instructions found [https://commons.lbl.gov/display/itfaq/OTP+%28One+Time+Password%29+Service here] to set up the Pledge application, which gives you a one-time password for logging into the cluster (see '''Installing and Configuring the OTP Token''').<br />
<br />
== Directory setup ==<br />
<br />
=== Home Directory Quota ===<br />
<br />
There is a 10GB quota limit enforced on $HOME directory<br />
(/global/home/users/username) usage. Please keep your usage below<br />
this limit. There will be NETAPP snapshots in place in this file<br />
system so we suggest you store only your source code and scripts<br />
in this area and store all your data under /clusterfs/cortex<br />
(see below).<br />
<br />
In order to see your current quota and usage, use the following command:<br />
<br />
quota -s<br />
<br />
=== Data ===<br />
<br />
For large amounts of data, please create a directory<br />
<br />
/clusterfs/cortex/scratch/username<br />
<br />
and store the data inside that directory. Note that unlike the home directory, scratch space is not backed up and permanence of your data is not guaranteed. There is a total limit of 4 TB for this drive that is shared by everyone at the Redwood center.<br />
<br />
== Connect ==<br />
<br />
==== Pledge App (get a password) ====<br />
<br />
* Run the pledge app and click "Generate one-time password"<br />
* Enter your PIN and press "Enter"<br />
* The application will present your 7 digit one time password<br />
<br />
=== ssh to a login node ===<br />
<br />
ssh -Y username@hpc.brc.berkeley.edu<br />
<br />
and use your one-time password.<br />
<br />
If you intend to work with a remote GUI session, you can add the -C flag to the command above to compress data sent through the ssh tunnel.<br />
<br />
''' note: please don't use the login nodes for computations (e.g. matlab, python)! '''<br />
<br />
=== Setup environment ===<br />
<br />
* put all your customizations into your .bashrc <br />
* for login shells, .bash_profile is used, which in turn loads .bashrc<br />
<br />
=== Using a Windows machine ===<br />
Windows is not a Unix-based operating system and as a result does not natively interface with a Unix environment. Download the following two pieces of software to create a workaround:<br />
* Install a Unix environment emulator to interface directly with the cluster. Cygwin [http://www.cygwin.com] seems to work well. During installation make sure to install Net -> "openssh". Editors -> "vim" is also recommended. Then you can follow the instructions in ''ssh to a login node'' above.<br />
* Install an SFTP/SCP/FTP client to allow for file sharing between the cluster and your local machine. WinSCP [http://www.winscp.net] is recommended. ExpanDrive can also be used to create a cluster-based network drive on your local machine.<br />
<br />
== Useful commands ==<br />
<br />
See https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/scheduler/ucb-supercluster-slurm-migration for a detailed FAQ on the SLURM job manager. <br />
<br />
Full description of our system by the LBL folks is at http://go.lbl.gov/hpcs-user-svcs/ucb-supercluster/cortex<br />
<br />
=== SLURM usage ===<br />
<br />
* Submitting a Job<br />
<br />
From the login node, you can submit jobs to the compute nodes using the syntax<br />
<br />
sbatch myscript.sh<br />
<br />
where myscript.sh is a shell script containing the call to the executable to be submitted to the cluster. For a typical Matlab job, it would look like<br />
<br />
#!/bin/bash -l<br />
#SBATCH -p cortex<br />
#SBATCH --time=03:30:00<br />
#SBATCH --mem-per-cpu=2G<br />
cd /clusterfs/cortex/scratch/working/dir/for/your/code<br />
module load matlab/R2013a<br />
matlab -nodisplay -nojvm -r "mymatlabfunction( parameters); exit"<br />
exit<br />
<br />
The --time option defines the walltime of the job, an upper bound on the estimated runtime; the job will be killed once this time has elapsed. --mem-per-cpu specifies how much memory the job requires per CPU; the default is 1 GB. <br />
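The job-script fields above can also be filled in programmatically. Below is a small bash sketch of a helper that writes a job script like the one above; the helper name, paths, and Matlab command are hypothetical, not part of the cluster tooling.<br />

```shell
#!/bin/bash
# Hypothetical helper: write a SLURM job script like the example above.
# The function name, paths and Matlab command are illustrative only.
make_job() {
    local script="$1" walltime="$2" mem="$3" workdir="$4" cmd="$5"
    cat > "$script" <<EOF
#!/bin/bash -l
#SBATCH -p cortex
#SBATCH --time=$walltime
#SBATCH --mem-per-cpu=$mem
cd $workdir
module load matlab/R2013a
matlab -nodisplay -nojvm -r "$cmd; exit"
exit
EOF
}

# Write myjob.sh with a 3.5 h walltime and 2 GB of memory per CPU.
make_job myjob.sh 03:30:00 2G /clusterfs/cortex/scratch/me "myfunction(1)"
```

The generated myjob.sh can then be submitted with sbatch myjob.sh as described above.<br />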
<br />
* Monitoring Jobs <br />
<br />
Additional options can be passed to sbatch to capture output from running jobs:<br />
<br />
sbatch -o outputfile.txt -e errorfile.txt -J jobdescriptor myscript.sh<br />
<br />
The standard output of the job will be written to outputfile.txt, and any errors (e.g. if the job crashes) to errorfile.txt.<br />
<br />
* Cluster usage<br />
<br />
Use<br />
squeue<br />
to get a list of pending and running jobs on the cluster. It shows the user name, the job descriptor passed to sbatch, the runtime, and the nodes in use.<br />
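Standard text tools work well for filtering this output. The sketch below runs awk over squeue-style sample text (the sample lines are made up for illustration, not captured from the cluster) to list only jobs that are currently running (state R):<br />

```shell
# Made-up squeue-style output; the real column layout may differ slightly.
squeue_sample='JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
101 cortex sweep1 alice R 1:02:03 1 n0000.cortex
102 cortex sweep2 alice PD 0:00 1 (Resources)
103 cortex train bob R 12:40:11 1 n0003.cortex'

# Skip the header, keep running jobs (ST == "R"), print id, user, runtime.
echo "$squeue_sample" | awk 'NR > 1 && $5 == "R" {print $1, $4, $6}'
```

With real output, pipe squeue itself into the same awk filter.<br />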
<br />
=== Perceus commands ===<br />
<br />
The perceus manual is [http://www.warewulf-cluster.org/portal/book/export/html/7 here]<br />
<br />
* listing available cluster nodes:<br />
<br />
wwstats<br />
wwnodes<br />
<br />
* list cluster usage<br />
<br />
wwtop<br />
<br />
* to restrict the scope of these commands to cortex cluster, add the following line to your .bashrc<br />
<br />
export NODES='*cortex'<br />
<br />
* module list<br />
* module avail<br />
* module help<br />
<br />
* help pages are [http://lrc.lbl.gov/html/guide.html here]<br />
<br />
=== Finding out the list of occupants on each cluster node ===<br />
<br />
* One can find out the list of users on a particular node by ssh-ing into the node, e.g.<br />
<br />
ssh n0000.cortex<br />
<br />
* After logging into the node, type<br />
<br />
top<br />
<br />
* This is useful if you believe someone is abusing the machine and would like to send him/her a friendly reminder.<br />
<br />
= Usage Tips =<br />
Here are some tips on how to effectively use the cluster.<br />
<br />
== Embarrassingly Parallel Submissions ==<br />
<br />
Here is an example script pair for doing embarrassingly parallel submissions on the cluster.<br />
<br />
iterate.sh<br />
#!/bin/sh<br />
#Leap Size<br />
param1=11<br />
param2=1.2<br />
param3=.75<br />
#LeapSize<br />
for i in 14 15 16<br />
do<br />
#Epsilon<br />
for j in $(seq .8 .1 $param2);<br />
do<br />
#Beta<br />
for k in $(seq .65 .01 $param3);<br />
do<br />
echo $i,$j,$k<br />
qsub -v "LeapSize=$i,Epsilon=$j,Beta=$k" param_test.sh<br />
done<br />
done<br />
done<br />
<br />
param_test.sh<br />
#!/bin/bash<br />
#PBS -q cortex<br />
#PBS -l nodes=1:ppn=2:gpu<br />
#PBS -l walltime=10:35:00<br />
#PBS -o /global/home/users/mayur/Logs<br />
#PBS -e /global/home/users/mayur/Errors<br />
cd /global/home/users/mayur/HMC_reducedflip/<br />
module load matlab<br />
echo "Epsilon = ",$Epsilon<br />
echo "Leap Size = ",$LeapSize<br />
echo "Beta = ",$Beta<br />
matlab -nodisplay -nojvm -r "make_figures_fneval_cluster $LeapSize $Epsilon $Beta"<br />
<br />
Now run ./iterate.sh<br />
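Note that iterate.sh uses the older PBS qsub syntax. Since the cluster migrated to SLURM, the same sweep could be driven with sbatch --export instead. The sketch below is a dry run: it writes the sbatch commands it would run to a file for inspection rather than submitting them, and it assumes param_test.sh has been converted to a SLURM script.<br />

```shell
#!/bin/bash
# Dry-run SLURM version of the parameter sweep above. Each generated
# sbatch command is written to sweep_commands.txt instead of being run.
for LeapSize in 14 15 16; do
    for Epsilon in $(seq .8 .1 1.2); do
        for Beta in $(seq .65 .01 .75); do
            # Dry run: the command is printed, not executed.
            echo "sbatch --export=LeapSize=$LeapSize,Epsilon=$Epsilon,Beta=$Beta param_test.sh"
        done
    done
done > sweep_commands.txt

# Count the generated commands (3 LeapSize x 5 Epsilon x 11 Beta values).
wc -l < sweep_commands.txt
```

Once the list looks right, the jobs can be submitted with bash sweep_commands.txt.<br />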
<br />
== Mounting Cluster File System ==<br />
Mounting the cluster file system remotely lets you access files on the cluster with programs on your local machine, e.g. to edit code or examine simulation outputs (very useful). I often edit remote code using a text editor running on my local machine; this gives you the niceties of a native editor without having to copy code back and forth before running a simulation on the cluster.<br />
<br />
On Linux distributions you can mount your cluster home directory locally using sshfs [http://fuse.sourceforge.net/sshfs.html]<br />
<br />
sshfs hadley.berkeley.edu: <mount-dir><br />
<br />
On Mac and Windows machines the program ExpanDrive works well (uses Fuse under the hood): [http://www.expandrive.com]<br />
<br />
= Support Requests =<br />
<br />
* If you have a problem that is not covered on this page, you can send an email to our user list:<br />
<br />
[mailto:redwood_cluster@lists.berkeley.edu redwood_cluster@lists.berkeley.edu]<br />
<br />
* If you need additional help from the LBL group, send an email to their email list (please always cc our email list as well), or visit their [https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/ website].<br />
<br />
[mailto:hpcshelp@lbl.gov hpcshelp@lbl.gov]<br />
<br />
* In urgent cases, you can also email [mailto:kmuriki@lbl.gov Krishna Muriki] (LBL User Services) directly.</div>Giselyhttps://rctn.org/w/index.php?title=Seminars&diff=8125Seminars2015-05-09T05:39:42Z<p>Gisely: /* Tentative / Confirmed Speakers */</p>
<hr />
<div>== Instructions ==<br />
<br />
# Check the internal calendar (here) for a free seminar slot. Seminars are usually Wednesdays at noon, but it is flexible in case there is a day that works better for the speaker. However, it is usually best to avoid booking multiple speakers in the same week - it leads to "seminar burnout" and reduced attendance. But use your own judgement here - if it's a good opportunity and that's the only time that works then go ahead with it.<br />
# Once you have proposed a date to a speaker, fill in the speaker information under the appropriate date (or change if necessary). Use the status field to indicate whether the date is tentative or confirmed. Please also include your name as ''host'' in case somebody wants to contact you.<br />
# Once the invitation is confirmed with the speaker, change the status field to 'confirmed'. Also notify the webmaster (Bruno) [mailto:baolshausen@berkeley.edu] that we have a confirmed speaker so that he/she can update the public web page. Please include a title and abstract.<br />
# Natalie (HWNI) checks our web page regularly and will send out an announcement a week before and also include with the weekly neuro announcements, but if you don't get it confirmed until the last minute then make sure to email Natalie [mailto:nrterranova@berkeley.edu] as well to give her a heads up so she knows to send out an announcement in time.<br />
# If the speaker needs accommodations you should contact Natalie [mailto:nrterranova@berkeley.edu] to reserve a room at the faculty club. Tell her it's for a Redwood speaker so she knows how to bill it.<br />
# During the visit you will need to look after the visitor, schedule visits with other labs, make plans for lunch, dinner, etc., and introduce the speaker at the seminar (don't ask Bruno to do this at the last moment). Save receipts for any meals you paid for.<br />
# After the seminar and before the speaker leaves, make sure to give them Natalie's contact info and have them email her their receipts, explaining it's for reimbursement for a Redwood seminar. Natalie will then process the reimbursement. She can also help you with getting reimbursed for any expenses you incurred for meals and entertainment.<br />
<br />
== Tentative / Confirmed Speakers ==<br />
<br />
'''May 7, 2015'''<br />
* Speaker: Santani Teng<br />
* Affiliation: MIT<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''May 13, 2015'''<br />
* Speaker: Harri Valpola<br />
* Affiliation: ZenRobotics<br />
* Host: Brian<br />
* Status: Tentative<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''TBD'''<br />
* Speaker: Andrew Ng<br />
* Affiliation: Stanford/Baidu<br />
* Host: Jesse Engel - Redwood<br />
* Status: Tentative<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''TBD'''<br />
* Speaker: Allie Fletcher<br />
* Affiliation: UC Santa Cruz<br />
* Host: Mike S<br />
* Status: confirmed<br />
* Title: Scalable Identification for Structured Nonlinear Neural Systems<br />
* Abstract: TBA<br />
<br />
'''TBD'''<br />
* Speaker: Kanaka Rajan<br />
* Affiliation: Princeton University<br />
* Host: Jeff Teeters/Sarah Marzen<br />
* Status: was confirmed, needs to reschedule<br />
* Title: Generation of sequences through reconfiguration of ongoing activity in neural networks: A model of choice-specific cortical dynamics in virtual navigation<br />
* Abstract: TBA<br />
<br />
== Previous Seminars ==<br />
<br />
=== 2014/15 academic year ===<br />
<br />
'''2 July 2014'''<br />
* Speaker: Kelly Clancy<br />
* Affiliation: Feldman lab<br />
* Host: Guy<br />
* Status: confirmed<br />
* Title: Volitional control of neural assemblies in L2/3 of motor and somatosensory cortices<br />
* Abstract: I'll be talking about a joint effort between the Feldman, Carmena and Costa labs to study abstract task learning by small neuronal assemblies in intact networks. Brain-machine interfaces are a unique tool for studying learning, thanks to the direct mapping between neural activity and reward. We trained mice to operantly control an auditory cursor using spike-related calcium signals recorded with two-photon imaging in motor and somatosensory cortex, allowing us to assess the effects of learning with great spatial detail. Mice rapidly learned to modulate activity in layer 2/3 neurons, evident both across and within sessions. Interestingly, even neurons that exhibited very low or no spontaneous spiking--so-called 'silent' cells that are invisible to electrode-based techniques--could be behaviorally up-modulated for task performance. Learning was accompanied by modifications of firing correlations in spatially localized networks at fine scales.<br />
<br />
'''23 July 2014'''<br />
* Speaker: Gautam Agarwal<br />
* Affiliation: UC Berkeley/Champalimaud<br />
* Host: Friedrich Sommer<br />
* Status: confirmed<br />
* Title: Unsolved Mysteries of Hippocampal Dynamics<br />
* Abstract: Two radically different forms of electrical activity can be observed in the rat hippocampus: spikes and local field potentials (LFPs). Hippocampal pyramidal neurons are mostly silent, yet spike vigorously as the subject encounters particular locations in its environment. In contrast, LFPs appear to lack place-selectivity, persisting regardless of the rat's location. Recently, we found that in fact one can recover from LFPs the spatial information present in the underlying neuronal population, showing how these two signals are two sides of the same coin. Nonetheless, there are many aspects of the LFP that remain mysterious. I will review several observations and explanatory gaps which await further study. These include: the relationship of LFP patterns to anatomy; the elusive structure of gamma waves; complex forms of cross-frequency coupling; variations in LFP patterns seen when the rat explores its world more freely; reconciling the memory and navigation roles of the hippocampus.<br />
<br />
'''6 Aug 2014'''<br />
* Speaker: Georg Martius<br />
* Affiliation: Max Planck Institute, Leipzig<br />
* Host: Fritz Sommer<br />
* Status: confirmed<br />
* Title: Information driven self-organization of robotic behavior<br />
* Abstract: Autonomy is a puzzling phenomenon in nature and a major challenge in the world of artifacts. A key feature of autonomy in both natural and artificial systems is seen in the ability for independent exploration. In animals and humans, the ability to modify one's own pattern of activity is not only an indispensable trait for adaptation and survival in new situations, it also provides a learning system with novel information for improving its cognitive capabilities, and it is essential for development. Efficient exploration in high-dimensional spaces is a major challenge in building learning systems. We propose to implement the exploration as a deterministic law derived from maximizing an information quantity. More specifically, we use the predictive information of the sensor process (of a robot) to obtain an update rule (exploration dynamics) for the controller parameters. To be adequate for robotics applications, the non-stationary nature of the underlying time series has to be taken into account, which we do by proposing the time-local predictive information (TiPI). Importantly, the exploration dynamics is derived analytically, and by this we link information theory and dynamical systems. Without a random component, the change in the parameters is deterministically given as a function of the states in a certain time window. For an embodied system this means in particular that constraints, responses and current knowledge of the dynamical interaction with the environment can directly be used to advance further exploration. Randomness is replaced with spontaneity, which we demonstrate to restrict the search space automatically to the physically relevant dimensions. Its effectiveness will be presented with various experiments on high-dimensional robotic systems, and we argue that this is a promising way to avoid the curse of dimensionality. This talk describes joint work with Ralf Der and Nihat Ay.<br />
<br />
'''15 Aug 2014'''<br />
* Speaker: Juergen Schmidhuber<br />
* Affiliation: IDSIA, Switzerland<br />
* Host: James/Shariq<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''2 Sept 2014'''<br />
* Speaker: Oriol Vinyals <br />
* Affiliation: Google<br />
* Host: Guy<br />
* Status: confirmed<br />
* Title: Machine Translation with Long-Short Term Memory Models<br />
* Abstract: Large supervised deep neural networks have achieved good results on speech recognition and computer vision. Although very successful, deep neural networks can only be applied to problems whose inputs and outputs can be conveniently encoded with vectors of fixed dimensionality - they cannot easily be applied to problems whose inputs and outputs are sequences. In this work, we show how to use a large deep Long Short-Term Memory (LSTM) model to solve domain-agnostic supervised sequence-to-sequence problems with minimal manual engineering. Our model uses one LSTM to map the input sequence to a vector of fixed dimensionality and another LSTM to map the vector to the output sequence. We applied our model to a machine translation task and achieved encouraging results. On the WMT'14 translation task from English to French, a model combination of 6 large LSTMs achieves a BLEU score of 32.3 (where a larger score is better). For comparison, a strong standard statistical MT baseline achieves a BLEU score of 33.3. When we use our LSTM to rescore the n-best lists produced by the SMT baseline, we achieve a BLEU score of 36.3, which is a new state of the art. This is joint work with Ilya Sutskever and Quoc Le.<br />
<br />
'''19 Sept 2014'''<br />
* Speaker: Gary Marcus<br />
* Affiliation: NYU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''24 Sept 2014'''<br />
* Speaker: Alyosha Efros<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''30 Sep 2014'''<br />
* Speaker: Alejandro Bujan<br />
* Affiliation:<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Propagation and variability of evoked responses: the role of correlated inputs and oscillations<br />
* Abstract: <br />
<br />
'''8 Oct 2014'''<br />
* Speaker: Siyu Zhang<br />
* Affiliation: UC Berkeley<br />
* Host: Karl<br />
* Status: confirmed<br />
* Title: Long-range and local circuits for top-down modulation of visual cortical processing<br />
* Abstract:<br />
<br />
'''15 Oct 2014'''<br />
* Speaker: Tamara Broderick<br />
* Affiliation: UC Berkeley<br />
* Host: Yvonne/James<br />
* Status: confirmed<br />
* Title: Feature allocations, probability functions, and paintboxes<br />
* Abstract: Clustering involves placing entities into mutually exclusive categories. We wish to relax the requirement of mutual exclusivity, allowing objects to belong simultaneously to multiple classes, a formulation that we refer to as "feature allocation." The first step is a theoretical one. In the case of clustering the class of probability distributions over exchangeable partitions of a dataset has been characterized (via exchangeable partition probability functions and the Kingman paintbox). These characterizations support an elegant nonparametric Bayesian framework for clustering in which the number of clusters is not assumed to be known a priori. We establish an analogous characterization for feature allocation; we define notions of "exchangeable feature probability functions" and "feature paintboxes" that lead to a Bayesian framework that does not require the number of features to be fixed a priori. The second step is a computational one. Rather than appealing to Markov chain Monte Carlo for Bayesian inference, we develop a method to transform Bayesian methods for feature allocation (and other latent structure problems) into optimization problems with objective functions analogous to K-means in the clustering setting. These yield approximations to Bayesian inference that are scalable to large inference problems.<br />
<br />
'''29 Oct 2014'''<br />
* Speaker: Ken Nakayama<br />
* Affiliation: Harvard<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Topics in higher level visuo-motor control<br />
* Abstract: TBA<br />
<br />
'''5 Nov 2014''' - '''BVLC retreat'''<br />
<br />
'''20 Nov 2014'''<br />
* Speaker: Haruo Hosoya<br />
* Affiliation: ATR Institute, Japan<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''9 Dec 2014'''<br />
* Speaker: Dirk DeRidder<br />
* Affiliation: Dunedin School of Medicine, University of Otago, New Zealand<br />
* Host: Bruno/Walter Freeman<br />
* Status: confirmed<br />
* Title: The Bayesian brain, phantom percepts and brain implants<br />
* Abstract: TBA<br />
<br />
'''January 14, 2015'''<br />
* Speaker: Kevin O'Regan<br />
* Affiliation: CNRS - Université Paris Descartes<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''January 21, 2015'''<br />
* Speaker: Adrienne Fairhall<br />
* Affiliation: University of Washington<br />
* Host: Mike Schachter<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''January 26, 2015'''<br />
* Speaker: Abraham Peled<br />
* Affiliation: Mental Health Center, 'Technion' Israel Institute of Technology<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Clinical Brain Profiling: A Neuro-Computational Psychiatry<br />
* Abstract: TBA<br />
<br />
'''January 28, 2015'''<br />
* Speaker: Rich Ivry<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Embodied Decision Making: System interactions in sensorimotor adaptation and reinforcement learning<br />
* Abstract:<br />
<br />
'''February 11, 2015'''<br />
* Speaker: Mark Lescroart<br />
* Affiliation: UC Berkeley<br />
* Host: Karl<br />
* Status: tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''February 25, 2015'''<br />
* Speaker: Steve Chase<br />
* Affiliation: CMU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Joint Redwood/CNEP seminar<br />
* Abstract:<br />
<br />
'''March 3, 2015'''<br />
* Speaker: Andreas Herz<br />
* Affiliation: Bernstein Center, Munich<br />
* Host: Bruno/Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''March 3, 2015 - 4:00'''<br />
* Speaker: James Cooke<br />
* Affiliation: Oxford<br />
* Host: Mike Deweese<br />
* Status: confirmed<br />
* Title: Neural Circuitry Underlying Contrast Gain Control in Primary Auditory Cortex<br />
* Abstract:<br />
<br />
'''March 4, 2015'''<br />
* Speaker: Bill Sprague<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: V1 disparity tuning and the statistics of disparity in natural viewing<br />
* Abstract:<br />
<br />
'''March 11, 2015'''<br />
* Speaker: Jozsef Fiser<br />
* Affiliation: Central European University<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''April 1, 2015'''<br />
* Speaker: Saeed Saremi<br />
* Affiliation: Salk Inst<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''April 15, 2015'''<br />
* Speaker: Zahra M. Aghajan<br />
* Affiliation: UCLA<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Hippocampal Activity in Real and Virtual Environments<br />
* Abstract:<br />
<br />
=== 2013/14 academic year ===<br />
<br />
'''9 Oct 2013'''<br />
* Speaker: Ekaterina Brocke<br />
* Affiliation: KTH University, Stockholm, Sweden<br />
* Host: Tony<br />
* Status: confirmed<br />
* Title: Multiscale modeling in Neuroscience: first steps towards multiscale co-simulation tool development.<br />
* Abstract: Multiscale modeling/simulation attracts an increasing number of neuroscientists who study how different levels of organization (networks of neurons, cellular/subcellular levels) interact with each other across multiple scales of space and time to mediate different brain functions. Different scales are usually described by different physical and mathematical formalisms, making it non-trivial to perform the integration. In this talk, I will discuss key phenomena in neuroscience that can be addressed using subcellular/cellular models, and possible approaches to perform multiscale simulations, in particular a co-simulation method. I will also introduce several multiscale "toy" models of cellular/subcellular levels that were developed with the aim of understanding the numerical and technical problems that might appear during co-simulation. Finally, the first steps made towards the development of a multiscale co-simulation tool will be presented.<br />
<br />
'''29 Oct 2013 - note: 4:00'''<br />
* Speaker: Mitya Chklovskii<br />
* Affiliation: HHMI/Janelia Farm<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''30 Oct 2013'''<br />
* Speaker: Ilya Nemenman<br />
* Affiliation: Emory University, Departments of Physics and Biology<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Large N in neural data -- expecting the unexpected.<br />
* Abstract: Recently it has become possible to directly measure simultaneous collective states of many biological components, such as neural activities, genetic sequences, or gene expression profiles. These data are revealing striking results, suggesting, for example, that biological systems are tuned to criticality, and that effective models of these systems based on only pairwise interactions among constitutive components provide surprisingly good fits to the data. We will explore a handful of simplified theoretical models, largely focusing on statistical mechanics of Ising spins, that suggest plausible explanations for these observations. Specifically, I will argue that, at least in certain contexts, these intriguing observations should be expected in multivariate interacting data in the thermodynamic limit of many interacting components.<br />
<br />
'''31 Oct 2013'''<br />
* Speaker: Oriol Vinyals<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Brian<br />
* Status: confirmed<br />
* Title: Beyond Deep Learning: Scalable Methods and Models for Learning<br />
* Abstract: In this talk I will briefly describe several techniques I explored in my thesis that improve how to efficiently model signal representations and learn useful information from them. The building block of my dissertation is based on machine learning approaches to classification, where a (typically non-linear) function is learned from labeled examples to map from signals to some useful information (e.g. an object class present in an image, or a word present in an acoustic signal). One of the motivating factors of my work has been advances in neural networks with deep architectures (which have led to the terminology "deep learning"), which have shown state-of-the-art performance in acoustic modeling and object recognition -- the main focus of this thesis. In my work, I have contributed to both the learning (or training) of such architectures through faster and more robust optimization techniques, and also to the simplification of the deep architecture model to an approach that is simple to optimize. Furthermore, I derived a theoretical bound showing a fundamental limitation of shallow architectures based on sparse coding (which can be seen as a one-hidden-layer neural network), thus justifying the need for deeper architectures, while also empirically verifying these architectural choices on speech recognition. Many of my contributions have been used in a wide variety of applications, products and datasets as a result of many collaborations within ICSI and Berkeley, but also at Microsoft Research and Google Research.<br />
<br />
'''6 Nov 2013'''<br />
* Speaker: Garrett T. Kenyon<br />
* Affiliation: Los Alamos National Laboratory, The New Mexico Consortium<br />
* Host: Dylan Paiton<br />
* Status: Confirmed<br />
* Title: Using Locally Competitive Algorithms to Model Top-Down and Lateral Interactions<br />
* Abstract: Cortical connections consist of feedforward, feedback and lateral pathways. Infragranular layers project down the cortical hierarchy to both supra- and infragranular layers at the previous processing level, while the neurons in supragranular layers are linked by extensive long-range lateral projections that cross multiple cortical columns. However, most functional models of visual cortex only account for feedforward connections. Additionally, most models of visual cortex fail to account both for the thalamic projections to non-striate areas and the reciprocal connections from extrastriate areas back to the thalamus. In this talk, I will describe how a modified Locally Competitive Algorithm (LCA; Rozell et al, Neural Comp, 2008) can be used as a unifying framework for exploring the role of top-down and lateral cortical pathways within the context of deep, sparse, generative models. I will also describe an open source software tool called PetaVision that can be used to implement and execute hierarchical LCA-based models on multi-core, multi-node computer platforms without requiring specific knowledge of parallel-programming constructs.<br />
<br />
'''14 Nov 2013 (note: Thursday), ***12:30pm*** '''<br />
* Speaker: Geoffrey J Goodhill<br />
* Affiliation: Queensland Brain Institute and School of Mathematics and Physics, The University of Queensland, Australia<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Computational principles of neural wiring development<br />
* Abstract: Brain function depends on precise patterns of neural wiring. An axon navigating to its target must make guidance decisions based on noisy information from molecular cues in its environment. I will describe a combination of experimental and computational work showing that (1) axons may act as ideal observers when sensing chemotactic gradients, (2) the complex influence of calcium and cAMP levels on guidance decisions can be predicted mathematically, (3) the morphology of growth cones at the axonal tip can be understood in terms of just a few eigenshapes, and remarkably these shapes oscillate in time with periods ranging from minutes to hours. Together this work may shed light on how neural wiring goes wrong in some developmental brain disorders, and how best to promote appropriate regrowth of axons after injury.<br />
<br />
'''4 Dec 2013'''<br />
* Speaker: Zhenwen Dai<br />
* Affiliation: FIAS, Goethe University Frankfurt, Germany.<br />
* Host: Georgios Exarchakis<br />
* Status: Confirmed<br />
* Title: What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach <br />
* Abstract: We study optimal image encoding based on a generative approach with non-linear feature combinations and explicit position encoding. By far most approaches to unsupervised learning of visual features, such as sparse coding or ICA, account for translations by representing the same features at different positions. Some earlier models used a separate encoding of features and their positions to facilitate invariant data encoding and recognition. All probabilistic generative models with explicit position encoding have so far assumed a linear superposition of components to encode image patches. Here, we for the first time apply a model with non-linear feature superposition and explicit position encoding for patches. By avoiding linear superpositions, the studied model represents a closer match to component occlusions which are ubiquitous in natural images. In order to account for occlusions, the non-linear model encodes patches qualitatively very different from linear models by using component representations separated into mask and feature parameters. We first investigated encodings learned by the model using artificial data with mutually occluding components. We find that the model extracts the components, and that it can correctly identify the occlusive components with the hidden variables of the model. On natural image patches, the model learns component masks and features for typical image components. By using reverse correlation, we estimate the receptive fields associated with the model’s hidden units. We find many Gabor-like or globular receptive fields as well as fields sensitive to more complex structures. Our results show that probabilistic models that capture occlusions and invariances can be trained efficiently on image patches, and that the resulting encoding represents an alternative model for the neural encoding of images in the primary visual cortex. <br />
<br />
'''11 Dec 2013'''<br />
* Speaker: Kai Siedenburg<br />
* Affiliation: UC Davis, Petr Janata's Lab.<br />
* Host: Jesse Engel<br />
* Status: Confirmed<br />
* Title: Characterizing Short-Term Memory for Musical Timbre<br />
* Abstract: Short-term memory is a cognitive faculty central to the apprehension of music and speech. Little is known, however, about memory for musical timbre despite its “sisterhood” with speech; after all, speech can be regarded as sequencing of vocal timbre. Past research has isolated many characteristic effects of verbal memory. Are these also in play for non-vocal timbre sequences? We studied this question by considering short-term memory for serial order. Using timbres and dissimilarity data from McAdams et al. (Psych. Research, 1995), we employed a same/different discrimination paradigm. Experiment 1 (N = 30 MU + 30 nonMU) revealed effects of sequence length and timbral dissimilarity of items, as well as an interaction of musical training and pitch variability: in contrast to musicians, non-musicians' performance was impaired by simultaneous changes in pitch, compared to a constant pitch baseline. Experiment 2 (N = 22) studied whether musicians' memory for timbre sequences was independent of pitch irrespective of the degree of complexity of pitch progressions. Comparing sequences with pitch changing within and across standard and comparison to a constant pitch baseline, performance was now clearly impaired for the variable pitch condition. Experiment 3 (N = 22) showed primacy and recency effects for musicians, and reproduced a positive effect of timbral heterogeneity of sequences. Our findings demonstrate the presence of hallmark effects of verbal memory such as similarity, word length, and primacy/recency in the domain of non-vocal timbre, and suggest that memory for speech and non-vocal timbre sequences might to a large extent share underlying mechanisms.<br />
<br />
'''12 Dec 2013'''<br />
* Speaker: Matthias Bethge<br />
* Affiliation: University of Tubingen<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''22 Jan 2014'''<br />
* Speaker: Thomas Martinetz<br />
* Affiliation: Univ Luebeck<br />
* Host: Bruno/Fritz<br />
* Status: confirmed<br />
* Title: Orthogonal Sparse Coding and Sensing<br />
* Abstract: Sparse coding has been a very successful concept, since many natural signals have the property of being sparse in some dictionary (basis). Some natural signals are even sparse in an orthogonal basis, most prominently natural images, which are sparse in a respective wavelet transform. An encoding in an orthogonal basis has a number of advantages, e.g., finding the optimal coding coefficients is simply a projection instead of being NP-hard. Given some data, we want to find the orthogonal basis which provides the sparsest code. This problem can be seen as a generalization of Principal Component Analysis. We present an algorithm, Orthogonal Sparse Coding (OSC), which is able to find this basis very robustly. On natural images, it compresses on the level of JPEG, but it can adapt to arbitrary and special data sets and achieve significant improvements. With the property of being sparse in some orthogonal basis, we show how signals can be sensed very efficiently in a hierarchical manner with at most k log D sensing actions. This hierarchical sensing might relate to the way we sense the world, with interesting applications in active vision.<br />
<br />
'''29 Jan 2014'''<br />
* Speaker: David Klein<br />
* Affiliation: Audience<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''5 Feb 2014''' (leave open for Barth/Martinetz seminar)<br />
<br />
'''12 Feb 2014'''<br />
* Speaker: Ilya Sutskever <br />
* Affiliation: Google<br />
* Host: Zayd<br />
* Status: confirmed<br />
* Title: Continuous vector representations for machine translation<br />
* Abstract: Dictionaries and phrase tables are the basis of modern statistical machine translation systems. I will present a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures using large monolingual data, and by mapping between the languages using a small bilingual dataset. It uses distributed representations of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs. Joint work with Tomas Mikolov and Quoc Le.<br />
<br />
'''25 Feb 2014'''<br />
* Speaker: Alexander Terekhov <br />
* Affiliation: CNRS - Université Paris Descartes<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Constructing space: how a naive agent can learn spatial relationships by observing sensorimotor contingencies<br />
* Abstract:<br />
<br />
'''12 March 2014'''<br />
* Speaker: Carlos Portera-Cailliau<br />
* Affiliation: UCLA<br />
* Host: Mike<br />
* Status: confirmed<br />
* Title: Circuit defects in the neocortex of Fmr1 knockout mice<br />
* Abstract: TBA<br />
<br />
'''19 March 2014'''<br />
* Speaker: Dean Buonomano<br />
* Affiliation: UCLA<br />
* Host: Mike<br />
* Status: confirmed<br />
* Title: State-dependent Networks: Timing and Computations Based on Neural Dynamics and Short-term Plasticity<br />
* Abstract: The brain’s ability to seamlessly assimilate and process spatial and temporal information is critical to most behaviors, from understanding speech to playing the piano. Indeed, because the brain evolved to navigate a dynamic world, timing and temporal processing represent a fundamental computation. We have proposed that timing and the processing of temporal information emerge from the interaction between incoming stimuli and the internal state of neural networks. The internal state is defined not only by ongoing activity (the active state) but also by time-varying synaptic properties, such as short-term synaptic plasticity (the hidden state). One prediction of this hypothesis is that timing is a general property of cortical circuits. We provide evidence in this direction by demonstrating that in vitro cortical networks can “learn” simple temporal patterns. Finally, previous theoretical studies have suggested that recurrent networks capable of self-perpetuating activity hold significant computational potential. However, harnessing the computational potential of these networks has been hampered by the fact that such networks are chaotic. We show that it is possible to “tame” chaos through recurrent plasticity, and create a novel and powerful general framework for how cortical circuits compute.<br />
<br />
'''26 March 2014'''<br />
* Speaker: Robert G. Smith<br />
* Affiliation: University of Pennsylvania<br />
* Host: Mike S<br />
* Status: confirmed<br />
* Title: Role of Dendritic Computation in the Direction-Selective Circuit of Retina<br />
* Abstract: The retina utilizes a variety of signal processing mechanisms to compute direction from image motion. The computation is accomplished by a circuit that includes starburst amacrine cells (SBACs), which are GABAergic neurons presynaptic to direction-selective ganglion cells (DSGCs). SBACs are symmetric neurons with several branched dendrites radiating out from the soma. When a stimulus moving back and forth along a SBAC dendrite sequentially activates synaptic inputs, larger post-synaptic potentials (PSPs) are produced in the dendritic tips when the stimulus moves outwards from the soma. The directional difference in EPSP amplitude is further amplified near the dendritic tips by voltage-gated channels to produce directional release of GABA. Reciprocal inhibition between adjacent SBACs may also amplify directional release. Directional signals in the independent SBAC branches are preserved because each dendrite makes selective contacts only with DSGCs of the appropriate preferred-direction. Directional signals are further enhanced within the dendritic arbor of the DSGC, which essentially comprises an array of distinct dendritic compartments. Each of these dendritic compartments locally sum excitatory and inhibitory inputs, amplifies them with voltage-gated channels, and generates spikes that propagate to the axon via the soma. Overall, the computation of direction in the retina is performed by several local dendritic mechanisms both presynaptic and postsynaptic, with the result that directional responses are robust over a broad range of stimuli.<br />
<br />
'''16 April 2014'''<br />
* Speaker: David Pfau<br />
* Affiliation: Columbia<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''22 April 2014 *Tuesday*'''<br />
* Speaker: Jochen Braun<br />
* Affiliation: Otto-von-Guericke University, Magdeburg<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Dynamics of visual perception and collective neural activity<br />
* Abstract:<br />
<br />
'''29 April 2014'''<br />
* Speaker: Giuseppe Vitiello<br />
* Affiliation: University of Salerno<br />
* Host: Fritz/Walter Freeman<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''30 April 2014'''<br />
* Speaker: Masataka Watanabe<br />
* Affiliation: University of Tokyo / Max Planck Institute for Biological Cybernetics<br />
* Host: Gautam Agarwal<br />
* Status: confirmed<br />
* Title: Turing Test for Machine Consciousness and the Chaotic Spatiotemporal Fluctuation Hypothesis<br />
* Abstract: I propose an experimental method to test various hypotheses on consciousness. Inspired by Sperry's observation that split-brain patients possess two independent streams of consciousness, the idea is to implement candidate neural mechanisms of visual consciousness onto an artificial cortical hemisphere and test whether subjective experience is evoked in the device's visual hemifield. In contrast to modern neurosynthetic devices, I show that mimicking interhemispheric connectivity assures that authentic and fine-grained subjective experience arises only when a stream of consciousness is generated within the device. The test is valid under a widely believed assumption regarding interhemispheric connectivity and neuronal stimulus-invariance. (I will briefly explain my own evidence that human V1 does not respond to changes in the contents of visual awareness [1].)<br />
<br />
If consciousness is actually generated within the device, we should be able to construct a case where two objects presented in the device's visual field are distinguishable by visual experience but not by what is communicated through the brain-machine interface. As strange as it may sound, and clearly violating the laws of physics, this is likely to be happening in the intact brain, where unified subjective bilateral vision and its verbal report occur without the total interhemispheric exchange of conscious visual information.<br />
<br />
Together, I present a hypothesis on the neural mechanism of consciousness, “The Chaotic Spatiotemporal Fluctuation Hypothesis”, that passes the proposed test for visual qualia and also explains how the physics that we know of today is violated. Here, neural activity is divided into two components, the time-averaged activity and the residual temporally fluctuating activity, where the former serves as the content of consciousness (the neuronal population vector) and the latter as consciousness itself. The content is “read” into consciousness in the sense that every local perturbation caused by a change in the neuronal population vector creates a spatiotemporal wave in the fluctuation component that travels throughout the system. Deterministic chaos assures that every local difference makes a difference to the whole of the dynamics, as in the butterfly effect, serving as a foundation for the holistic nature of consciousness. I will present data from simultaneous electrophysiology-fMRI recordings and human fMRI [2] that support the existence of such large-scale causal fluctuation.<br />
<br />
Here, the chaotic fluctuation cannot be decoded to trace back the original perturbation in the neuronal population vector, because the initial states of all neurons would be required with infinite precision to do so. Hence what is transmitted between the two hemispheres is not "information" in the normal sense. This illustrates the violation of physics by the metaphysical assumption "chaotic spatiotemporal fluctuation is consciousness", where unification of bilateral vision and the solving of visual tasks (e.g. perfect symmetry detection) are achieved without exchanging the otherwise required Shannon information between the two hemispheres.<br />
<br />
Finally, minimal and realistic versions of the proposed test for visual qualia can be conducted on laboratory animals to validate the hypothesis. These versions deal with two biological hemispheres, which we already know contain consciousness. We dissect the interhemispheric connectivity and form instead an artificial one that is capable of filtering out the neural fluctuation component. A limited interhemispheric connectivity may be sufficient, which would drastically reduce the technological challenge. If the subject is capable of conducting a bilateral stimulus-matching task with the full artificial interhemispheric connectivity, but not when the fluctuation component is filtered out, this can be considered strong supporting evidence for the hypothesis.<br />
<br />
1. Watanabe, M., Cheng, K., Ueno, K., Asamizuya, T., Tanaka, K., Logothetis, N., Attention but not awareness modulates the BOLD signal in the human V1 during binocular suppression. Science, 2011. 334(6057): p. 829-31.<br />
<br />
2. Watanabe, M., Bartels, A., Macke, J., Logothetis, N., Temporal jitter of the BOLD signal reveals a reliable initial dip and improved spatial resolution. Curr Biol, 2013. 23(21): p. 2146-50.<br />
<br />
'''11 June 2014'''<br />
* Speaker: Stuart Hameroff<br />
* Affiliation: University of Arizona, Tucson<br />
* Host: Gautam<br />
* Status: confirmed<br />
* Title: ‘Tuning the brain’ – Treating mental states through microtubule vibrations <br />
* Abstract: Do mental states derive entirely from brain neuronal membrane activities? Neuronal interiors are organized by microtubules (‘MTs’), protein polymers proposed to encode memory, process information and support consciousness. Using nanotechnology, Bandyopadhyay’s group at the National Institute for Materials Science (NIMS) in Japan has shown coherent vibrations (megahertz to 10 kilohertz) from microtubule bundles inside active neurons, vibrations (electric field potentials ~40 to 50 mV) able to influence membrane potentials. This suggests EEG rhythms are ‘beat’ frequencies of megahertz vibrations in microtubules inside neurons (Hameroff and Penrose, 2014), and that consciousness and cognition involve vibrational patterns resonating across scales in the brain, more like music than computation. MT megahertz vibrations may be a useful therapeutic target for ‘tuning’ mood and mental states. Among noninvasive transcranial brain stimulation techniques (TMS, tDCS), transcranial ultrasound (TUS) consists of megahertz mechanical vibrations. Applied at the scalp, low intensity, sub-thermal TUS safely reaches the brain. In human studies, brief (15 to 30 seconds) TUS at 0.5, 2 and 8 megahertz to frontal-temporal cortex results in 40 minutes or longer of reported mood improvement, and focused TUS enhances sensory discrimination (Legon et al, 2014). In vitro, ultrasound promotes neurite outgrowth in embryonic neurons (Raman), and stabilizes microtubules against disassembly (Gupta). (In Alzheimer’s disease, MTs disassemble and release tau.) These findings suggest ‘tuning the brain’ with TUS should be a safe, effective and inexpensive treatment for Alzheimer’s, traumatic brain injury, depression, anxiety, PTSD and other disorders.<br />
<br />
References: Hameroff S, Penrose R (2014) Phys Life Rev http://www.sciencedirect.com/science/article/pii/S1571064513001188; Sahu et al (2013) Biosens Bioelectron 47:141–8; Sahu et al (2013) Appl Phys Lett 102:123701; Legon et al (2014) Nature Neuroscience 17: 322–329<br />
<br />
'''25 June 2014'''<br />
* Speaker: Peter Loxley<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: The two-dimensional Gabor function adapted to natural image statistics: An analytical model of simple-cell responses in the early visual system<br />
* Abstract: TBA<br />
<br />
=== 2012/13 academic year ===<br />
<br />
'''26 Sept 2012''' <br />
* Speaker: Jason Yeatman<br />
* Affiliation: Department of Psychology, Stanford University<br />
* Host: Bruno/Susana Chung<br />
* Status: confirmed<br />
* Title: The Development of White Matter and Reading Skills<br />
* Abstract: The development of cerebral white matter involves both myelination and pruning of axons, and the balance between these two processes may differ between individuals. Cross-sectional measures of white matter development mask the interplay between these active developmental processes and their connection to cognitive development. We followed a cohort of 39 children longitudinally for three years, and measured white matter development and reading development using diffusion tensor imaging and behavioral tests. In the left arcuate and inferior longitudinal fasciculus, children with above-average reading skills initially had low fractional anisotropy (FA) with a steady increase over the 3-year period, while children with below-average reading skills had higher initial FA that declined over time. We describe a dual-process model of white matter development that balances biological processes that have opposing effects on FA, such as axonal myelination and pruning, to explain the pattern of results.<br />
<br />
'''8 Oct 2012''' <br />
* Speaker: Sophie Deneve<br />
* Affiliation: Laboratoire de Neurosciences cognitives, ENS-INSERM<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Balanced spiking networks can implement dynamical systems with predictive coding<br />
* Abstract: Neural networks can integrate sensory information and generate continuously varying outputs, even though individual neurons communicate only with spikes---all-or-none events. Here we show how this can be done efficiently if spikes communicate "prediction errors" between neurons. We focus on the implementation of linear dynamical systems and derive a spiking network model from a single optimization principle. Our model naturally accounts for two puzzling aspects of cortex. First, it provides a rationale for the tight balance and correlations between excitation and inhibition. Second, it predicts asynchronous and irregular firing as a consequence of predictive population coding, even in the limit of vanishing noise. We show that our spiking networks have error-correcting properties that make them far more accurate and robust than comparable rate models. Our approach suggests spike times do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly under-estimated.<br />
<br />
<br />
'''19 Oct 2012'''<br />
* Speaker: Gert Van Dijck<br />
* Affiliation: Cambridge<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: A solution to identifying neurones using extracellular activity in awake animals: a probabilistic machine-learning approach<br />
* Abstract: Electrophysiological studies over the last fifty years have been hampered by the difficulty of reliably assigning signals to identified cortical neurones. Previous studies have employed a variety of measures based on spike timing or waveform characteristics to tentatively classify other neurone types (Vos et al., Eur. J. Neurosci., 1999; Prsa et al., J. Neurosci., 2009), in some cases supported by juxtacellular labelling (Simpson et al., Prog. Brain Res., 2005; Holtzman et al., J. Physiol., 2006; Barmack and Yakhnitsa, J. Neurosci., 2008; Ruigrok et al., J. Neurosci., 2011), or intracellular staining and/or assessment of membrane properties (Chadderton et al., Nature, 2004; Jorntell and Ekerot, J. Neurosci., 2006; Rancz et al., Nature, 2007). Anaesthetised animals have been widely used as they can provide a ground truth through neuronal labelling, which is much harder to achieve in awake animals, where spike-derived measures tend to be relied upon (Lansink et al., Eur. J. Neurosci., 2010). Whilst spike shapes carry potentially useful information for classifying neuronal classes, they vary with electrode type and the geometric relationship between the electrode and the spike generation zone (Van Dijck et al., Int. J. Neural Syst., 2012). Moreover, spike-shape measurement is achieved with a variety of techniques, making it difficult to compare and standardise between laboratories. In this study we build probabilistic models on the statistics derived from the spike trains of spontaneously active neurones in the cerebellum and the ventral midbrain. The mean spike frequency in combination with the log-interval entropy (Bhumbra and Dyball, J. Physiol.-London, 2004) of the inter-spike-interval distribution yields the highest prediction accuracy. The cerebellum model consists of two sub-models: a molecular layer - Purkinje layer model and a granular layer - Purkinje layer model. The first model identifies molecular layer interneurones and Purkinje cells with high accuracy (92.7 %), while the latter identifies Golgi cells, granule cells, mossy fibers and Purkinje cells with high accuracy (99.2 %). Furthermore, it is shown that the model trained on anaesthetised rat and decerebrate cat data has broad applicability to other species and behavioural states: anaesthetised mice (80 %), awake rabbits (94.2 %) and awake rhesus monkeys (89 - 90 %). Recently, optogenetics has made it possible to obtain a ground truth about cell classes. Using optogenetically identified GABAergic and dopaminergic cells, we build similar statistical models to identify these neuron types in the ventral midbrain. Hence, our approach should be of general use to a broad variety of laboratories.<br />
<br />
'''Tuesday, 23 Oct 2012''' <br />
* Speaker: Jaimie Sleigh<br />
* Affiliation: University of Auckland<br />
* Host: Fritz/Andrew Szeri<br />
* Status: confirmed<br />
* Title: Is General Anesthesia a failure of cortical information integration?<br />
* Abstract: General anesthesia and natural sleep share some commonalities and some differences. Quite a lot is known about the chemical and neuronal effects of general anesthetic drugs. There are two main groups of anesthetic drugs, which can be distinguished by their effects on the EEG. The most commonly used drugs exert a strong GABAergic action, whereas a second group is characterized by minimal GABAergic effects but significant NMDA blockade. It is less clear which of these effects, and how, result in the patient's failure to wake up when the surgeon cuts them. I will present some results from experimental brain slice work, and from theoretical mean field modelling of anesthesia and sleep, that support the idea that the final common mechanism of both types of anesthesia is fragmentation of long-distance information flow in the cortex.<br />
<br />
'''31 Oct 2012''' (Halloween)<br />
* Speaker: Jonathan Landy<br />
* Affiliation: UCSB<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Mean-field replica theory: review of basics and a new approach<br />
* Abstract: Replica theory provides a general method for evaluating the mode of a distribution, and has varied applications to problems in statistical mechanics, signal processing, etc. Evaluation of the formal expressions arising in replica theory represents a formidable technical challenge, but one that physicists have apparently intuited correct methods for handling. In this talk, I will first provide a review of the historical development of replica theory, covering: 1) motivation, 2) the intuited "Parisi ansatz" solution, 3) continued controversies, and 4) a survey of applications (including to neural networks). Following this, I will discuss an exploratory effort of mine, aimed at developing an ansatz-free solution method. As an example, I will work out the phase diagram for a simple spin-glass model. This talk is intended primarily as a tutorial.<br />
<br />
'''7 Nov 2012''' <br />
* Speaker: Tom Griffiths<br />
* Affiliation: UC Berkeley<br />
* Host: Daniel Little<br />
* Status: Confirmed<br />
* Title: Identifying human inductive biases<br />
* Abstract: People are remarkably good at acquiring complex knowledge from limited data, as is required in learning causal relationships, categories, or aspects of language. Successfully solving inductive problems of this kind requires having good "inductive biases" - constraints that guide inductive inference. Viewed abstractly, understanding human learning requires identifying these inductive biases and exploring their origins. I will argue that probabilistic models of cognition provide a framework that can facilitate this project, giving a transparent characterization of the inductive biases of ideal learners. I will outline how probabilistic models are traditionally used to solve this problem, and then present a new approach that uses Markov chain Monte Carlo algorithms as the basis for an experimental method that magnifies the effects of inductive biases.<br />
<br />
'''19 Nov 2012''' (Monday) (Thanksgiving week)<br />
* Speaker: Bin Yu<br />
* Affiliation: Dept. of Statistics and EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Representation of Natural Images in V4<br />
* Abstract: The functional organization of area V4 in the mammalian ventral visual pathway is far from being well understood. V4 is believed to play an important role in the recognition of shapes and objects and in visual attention, but the complexity of this cortical area makes it hard to analyze. In particular, no current model of V4 has shown good predictions for neuronal responses to natural images and there is no consensus on the primary role of V4.<br />
In this talk, we present an analysis of electrophysiological data on the responses of V4 neurons to natural images. We propose a new computational model that achieves prediction performance for V4 comparable to that for V1 neurons. Our model does not rely on any pre-defined image features but only on invariance and sparse coding principles. We interpret our model using sparse principal component analysis and discover two groups of neurons: those selective to texture versus those selective to contours. This supports the thesis that one primary role of V4 is to extract objects from background in the visual field. Moreover, our study also confirms the diversity of V4 neurons. Among those selective to contours, some are selective to orientation, others to acute curvature features. (This is joint work with J. Mairal, Y. Benjamini, B. Willmore, M. Oliver and J. Gallant.)<br />
<br />
'''30 Nov 2012''' <br />
* Speaker: Yan Karklin<br />
* Affiliation: NYU<br />
* Host: Tyler<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''10 Dec 2012 (note this would be the Monday after NIPS)''' <br />
* Speaker: Marius Pachitariu<br />
* Affiliation: Gatsby / UCL<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: NIPS paper "Learning visual motion in recurrent neural networks"<br />
* Abstract: We present a dynamic nonlinear generative model for visual motion based on a latent representation of binary-gated Gaussian variables connected in a network. Trained on sequences of images by an STDP-like rule, the model learns to represent different movement directions in different variables. We use an online approximate inference scheme that can be mapped to the dynamics of networks of neurons. Probed with drifting grating stimuli and moving bars of light, neurons in the model show patterns of responses analogous to those of direction-selective simple cells in primary visual cortex. We show how the computations of the model are enabled by a specific pattern of learnt asymmetric recurrent connections. I will also briefly discuss our application of recurrent neural networks as statistical models of simultaneously recorded spiking neurons.<br />
<br />
'''12 Dec 2012''' <br />
* Speaker: Ian Goodfellow<br />
* Affiliation: U Montreal<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''7 Jan 2013'''<br />
* Speaker: Stuart Hameroff<br />
* Affiliation: University of Arizona <br />
* Host: Gautam Agarwal<br />
* Status: confirmed<br />
* Title: Quantum cognition and brain microtubules <br />
* Abstract: Cognitive decision processes are generally seen as classical Bayesian probabilities, but may be better suited to quantum mathematics. For example: 1) Psychological conflict, ambiguity and uncertainty can be viewed as (quantum) superposition of multiple possible judgments and beliefs. 2) Measurement (e.g. answering a question, reaching a decision) reduces possibilities to definite states (‘constructing reality’, ‘collapsing the wave function’). 3) Previous questions influence subsequent answers, so sequence affects outcomes (‘contextual non-commutativity’). 4) Judgments and choices may deviate from classical logic, suggesting random, or ‘non-computable’ quantum influences. Can quantum cognition operate in the brain? Do classical brain activities simulate quantum processes? Or have biomolecular quantum devices evolved? In this talk I will discuss how a finer-scale, intra-neuronal level of quantum information processing in cytoskeletal microtubules can accumulate, operate upon and integrate quantum information and memory for self-collapse to classical states which regulate axonal firings, controlling behavior.<br />
<br />
'''Monday 14 Jan 2013, 1:00pm'''<br />
* Speaker: Dibyendu Mandal <br />
* Affiliation: Physics Dept., University of Maryland (Jarzynski group)<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: An exactly solvable model of Maxwell’s demon<br />
* Abstract: The paradox of Maxwell’s demon has stimulated numerous thought experiments, leading to discussions about the thermodynamic implications of information processing. However, the field has lacked a tangible example or model of an autonomous, mechanical system that reproduces the actions of the demon. To address this issue, we introduce an explicit model of a device that can deliver work to lift a mass against gravity by rectifying thermal fluctuations, while writing information to a memory register. We solve for the steady-state behavior of the model and construct its nonequilibrium phase diagram. In addition to the engine-like action described above, we identify a Landauer eraser region in the phase diagram where the model uses externally supplied work to remove information from the memory register. Our model offers a simple paradigm for investigating the thermodynamics of information processing by exposing a transparent mechanism of operation.<br />
<br />
'''23 Jan 2013'''<br />
* Speaker: Carlos Brody<br />
* Affiliation: Princeton<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Neural substrates of decision-making in the rat<br />
* Abstract: Gradual accumulation of evidence is thought to be a fundamental component of decision-making. Over the last 16 years, research in non-human primates has revealed neural correlates of evidence accumulation in parietal and frontal cortices, and other brain areas. However, the circuit mechanisms underlying these neural correlates remain unknown. Reasoning that a rodent model of evidence accumulation would allow a greater number of experimental subjects, and therefore experiments, as well as facilitate the use of molecular tools, we developed a rat accumulation-of-evidence task, the "Poisson Clicks" task. In this task, sensory evidence is delivered in pulses whose precisely-controlled timing varies widely within and across trials. The resulting data are analyzed with models of evidence accumulation that use the richly detailed information of each trial’s pulse timing to distinguish between different decision mechanisms. The method provides great statistical power, allowing us to: (1) provide compelling evidence that rats are indeed capable of gradually accumulating evidence for decision-making; (2) accurately estimate multiple parameters of the decision-making process from behavioral data; and (3) measure, for the first time, the diffusion constant of the evidence accumulator, which we show to be optimal (i.e., equal to zero). In addition, the method provides a trial-by-trial, moment-by-moment estimate of the value of the accumulator, which can then be compared in awake behaving electrophysiology experiments to trial-by-trial, moment-by-moment neural firing rate measures. Based on such a comparison, we describe data and a novel analysis approach that reveals differences between parietal and frontal cortices in the neural encoding of accumulating evidence. Finally, using semi-automated training methods to produce tens of rats trained in the Poisson Clicks accumulation-of-evidence task, we have also used pharmacological inactivation to ask, for the first time, whether parietal and frontal cortices are required for accumulation of evidence, and we are using optogenetic methods to rapidly and transiently inactivate brain regions so as to establish precisely when, during each decision-making trial, each brain region's activity is necessary for performance of the task.<br />
<br />
'''28 Jan 2013'''<br />
* Speaker: Eugene M. Izhikevich<br />
* Affiliation: Brain Corporation<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Spikes<br />
* Abstract: Most communication in the brain is via spikes. While we understand the spike-generation mechanism of individual neurons, we fail to appreciate the spike-timing code and its role in neural computations. The speaker starts with simple models of neuronal spiking and bursting, describes small neuronal circuits that learn spike-timing code via spike-timing dependent plasticity (STDP), and finishes with biologically detailed and anatomically accurate large-scale brain models.<br />
<br />
'''29 Jan 2013'''<br />
* Speaker: Goren Gordon<br />
* Affiliation: Weizmann Institute<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Hierarchical Curiosity Loops – Model, Behavior and Robotics<br />
* Abstract: Autonomously learning about one's own body and its interaction with the environment is a formidable challenge, yet it is ubiquitous in biology: every animal pup and every human infant accomplishes this task in its first few months of life. Furthermore, biological agents' curiosity actively drives them to explore and experiment in order to expedite their learning progress. To bridge the gap between biological and artificial agents, a formal mathematical theory of curiosity was developed that attempts to explain observed biological behaviors and enable the emergence of curiosity in robots. In the talk, I will present the hierarchical curiosity loops model, its application to rodents' exploratory behavior, and its implementation in a fully autonomously learning and behaving reaching robot.<br />
<br />
'''29 Jan 2013'''<br />
* Speaker: Jenny Read<br />
* Affiliation: Institute of Neuroscience, Newcastle University<br />
* Host: Sarah<br />
* Status: confirmed<br />
* Title: Stereoscopic vision<br />
* Abstract: [To be written]<br />
<br />
'''7 Feb 2013'''<br />
* Speaker: Valero Laparra<br />
* Affiliation: University of Valencia<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Empirical statistical analysis of phases in Gabor filtered natural images<br />
* Abstract:<br />
<br />
'''20 Feb 2013'''<br />
* Speaker: Dolores Bozovic<br />
* Affiliation: UCLA<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Bifurcations and phase-locking dynamics in the auditory system<br />
* Abstract: The inner ear constitutes a remarkable biological sensor that exhibits nanometer-scale sensitivity of mechanical detection. The first step in auditory processing is performed by hair cells, which convert movement into electrical signals via opening of mechanically gated ion channels. These cells operate in a viscous medium, but can nevertheless sustain oscillations, amplify incoming signals, and even exhibit spontaneous motility, indicating the presence of an underlying active amplification system. Theoretical models have proposed that a hair cell constitutes a nonlinear system with an internal feedback mechanism that can drive it across a bifurcation and into an unstable regime. Our experiments explore the nonlinear response as well as the feedback mechanisms that enable self-tuning already at the peripheral level, as measured in vitro on sensory tissue. A simple dynamic systems framework will be discussed that captures the main features of the experimentally observed behavior in the form of an Arnold Tongue.<br />
<br />
'''27 March 2013'''<br />
* Speaker: Dale Purves<br />
* Affiliation: Duke<br />
* Host: Sarah<br />
* Status: confirmed<br />
* Title: How Visual Evolution Determines What We See<br />
* Abstract: Information about the physical world is excluded from visual stimuli by the nature of biological vision (the inverse optics problem). Nonetheless, humans and other visual animals routinely succeed in their environments. The talk will explain how the assignment of perceptual values to visual stimuli according to the frequency of occurrence of stimulus patterns resolves the inverse problem and determines the basic visual qualities we see. This interpretation of vision implies that the best (and perhaps the only) way to understand visual system circuitry is to evolve it, an idea supported by recent work.<br />
<br />
'''9 April 2013'''<br />
* Speaker: Mounya Elhilali<br />
* Affiliation: Johns Hopkins<br />
* Host: Tyler<br />
* Status: confirmed<br />
* Title: Attention at the cocktail party: Neural bases and computational strategies for auditory scene analysis<br />
* Abstract: The perceptual organization of sounds in the environment into coherent objects is a feat constantly facing the auditory system. It manifests itself in the everyday challenge faced by humans and animals alike to parse complex acoustic information arising from multiple sound sources into separate auditory streams. While seemingly effortless, uncovering the neural mechanisms and computational principles underlying this remarkable ability remain a challenge for both the experimental and theoretical neuroscience communities. In this talk, I discuss the potential role of neuronal tuning in mammalian primary auditory cortex in mediating this process. I also examine the role of mechanisms of attention in adapting this neural representation to reflect both the sensory content and the changing behavioral context of complex acoustic scenes.<br />
<br />
'''17 April 2013'''<br />
* Speaker: Wiktor Młynarski<br />
* Affiliation: Max Planck Institute for Mathematics in the Sciences<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: Statistical Models of Binaural Sounds<br />
* Abstract: The auditory system exploits disparities in the sounds arriving at the left and right ear to extract information about the spatial configuration of sound sources. According to the widely acknowledged Duplex Theory, sounds of low frequency are localized based on Interaural Time Differences (ITDs) and localization of high frequency sources relies on Interaural Level Differences (ILDs). Natural sounds, however, possess a rich structure and contain multiple frequency components. This leads to the question: what are the contributions of different cues to sound position identification in the natural environment and how much information do they carry about its spatial structure? In this talk, I will present my attempts to answer the above questions using statistical, generative models of naturalistic (simulated) and fully natural binaural sounds.<br />
<br />
'''15 May 2013'''<br />
* Speaker: Byron Yu<br />
* Affiliation: CMU<br />
* Host: Bruno/Jose (jointly sponsored with CNEP)<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''22 May 2013'''<br />
* Speaker: Bijan Pesaran<br />
* Affiliation: NYU<br />
* Host: Bruno/Jose (jointly sponsored with CNEP)<br />
* Status: confirmed <br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
=== 2011/12 academic year ===<br />
<br />
'''15 Sep 2011 (Thursday, at noon)'''<br />
* Speaker: Kathrin Berkner<br />
* Affiliation: Ricoh Innovations Inc.<br />
* Host: Ivana Tosic<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''21 Sep 2011'''<br />
* Speaker: Mike Kilgard<br />
* Affiliation: UT Dallas<br />
* Host: Michael Silver<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''27 Sep 2011'''<br />
* Speaker: Moshe Gur<br />
* Affiliation: Dept. of Biomedical Engineering, Technion, Israel Institute of Technology<br />
* Host: Bruno/Stan<br />
* Status: Confirmed<br />
* Title: On the unity of perception: How does the brain integrate activity evoked at different cortical loci?<br />
* Abstract: Any physical device we know of, including computers, must, when comparing A to B, send the information to a point C. I have done experiments in three modalities (somatosensory, auditory, and visual) in which two different loci in primary cortex are stimulated, and I argue that the "machine" converging hypothesis cannot explain the perceptual results. Thus we must assume a non-converging mechanism whereby the brain, at times, can compare (integrate, process) events that take place at different loci without sending the information to a common target. Once we allow for such a mechanism, many phenomena can be viewed differently. Take, for example, the question of how and where multi-sensory integration takes place: we perceive a synchronized talking face, yet the detailed visual and auditory information is represented at very different brain loci.<br />
<br />
'''5 Oct 2011'''<br />
* Speaker: Susanne Still<br />
* Affiliation: University of Hawaii at Manoa<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: Predictive power, memory and dissipation in learning systems operating far from thermodynamic equilibrium<br />
* Abstract: Understanding the physical processes that underlie the functioning of biological computing machinery often requires describing processes that occur far from thermodynamic equilibrium. In recent years significant progress has been made in this area, most notably through Jarzynski's work relation and Crooks' fluctuation theorem. In this talk I will explore how dissipation of energy is related to a system's information-processing inefficiency. The focus is on driven systems that are embedded in a stochastic operating environment. If we describe the system as a state machine, then we can interpret the stochastic dynamics as performing a computation that results in an (implicit) model of the stochastic driving signal. I will show that instantaneous non-predictive information, which serves as a measure of model inefficiency, provides a lower bound on the average dissipated work. This implies that learning systems with larger predictive power can operate more energetically efficiently, and one may speculate that biological systems have evolved to reflect this kind of adaptation. An interesting insight is that this purely physical requirement is perfectly in line with the general belief that a useful model must be predictive (at fixed model complexity). Our result thereby ties together ideas from learning theory with basic non-equilibrium thermodynamics.<br />
<br />
'''19 Oct 2011'''<br />
* Speaker: Graham Cummins<br />
* Affiliation: WSU<br />
* Host: Jeff Teeters<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''26 Oct 2011'''<br />
* Speaker: Shinji Nishimoto<br />
* Affiliation: Gallant lab, UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''14 Dec 2011'''<br />
* Speaker: Austin Roorda<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: How the unstable eye sees a stable and moving world<br />
* Abstract:<br />
<br />
'''11 Jan 2012'''<br />
* Speaker: Ken Nakayama<br />
* Affiliation: Harvard University<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Subjective Contours<br />
* Abstract: The concept of the receptive field in visual science has been transformative. It fueled the great discoveries of the second half of the 20th century, providing the dominant understanding of how the visual system works at its early stages. Its reign has been extended to the field of object recognition, where, in the form of a linear classifier, it provides a framework for understanding visual object recognition (DiCarlo and Cox, 2007).<br />
Untamed, however, are areas of visual perception, now more or less ignored, dubbed variously the 2.5D sketch, mid-level vision, or surface representations. Here, neurons with their receptive fields seem unable to bridge the gap, to supply us with even a plausible speculative framework for understanding amodal completion, subjective contours, and other surface phenomena. Correspondingly, these areas have become a backwater, ignored, leapt over.<br />
Subjective contours, however, remain as vivid as ever, even more so.<br />
Every day, our visual system makes countless visual inferences as to the layout of the world's surfaces and objects. What is remarkable is that subjective contours visibly reveal these inferences.<br />
<br />
'''Tuesday, 24 Jan 2012'''<br />
* Speaker: Aniruddha Das<br />
* Affiliation: Columbia University<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''22 Feb 2012'''<br />
* Speaker: Elad Schneidman <br />
* Affiliation: Department of Neurobiology, Weizmann Institute of Science<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Sparse high order interaction networks underlie learnable neural population codes<br />
* Abstract:<br />
<br />
'''29 Feb 2012 (at noon as usual)'''<br />
* Speaker: Heather Read<br />
* Affiliation: U. Connecticut<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: "Transformation of sparse temporal coding from auditory colliculus and cortex"<br />
* Abstract: TBD<br />
<br />
'''1 Mar 2012 (note: Thurs)'''<br />
* Speaker: Daniel Zoran<br />
* Affiliation: Hebrew University, Jerusalem<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''7 Mar 2012'''<br />
* Speaker: David Sivak<br />
* Affiliation: UCB<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''8 Mar 2012'''<br />
* Speaker: Ivan Schwab<br />
* Affiliation: UC Davis<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Evolution's Witness: How Eyes Evolved<br />
* Abstract:<br />
<br />
'''14 Mar 2012'''<br />
* Speaker: David Sussillo<br />
* Affiliation:<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''18 April 2012'''<br />
* Speaker: Kristofer Bouchard<br />
* Affiliation: UCSF<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Cortical Foundations of Human Speech Production<br />
* Abstract:<br />
<br />
'''23 May 2012''' (rescheduled from April 11)<br />
* Speaker: Logan Grosenick<br />
* Affiliation: Stanford, Deisseroth & Suppes Labs<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: Acquisition, creation, & analysis of 4D light fields with applications to calcium imaging & optogenetics<br />
* Abstract: In Light Field Microscopy (LFM), images can be computationally refocused after they are captured [1]. This permits acquiring focal stacks and reconstructing volumes from a single camera frame. In Light Field Illumination (LFI), the same ideas can be used to create an illumination system that can deliver focused light to any position in a volume without moving optics, and these two devices (LFM/LFI) can be used together in the same system [2]. So far, these imaging and illumination systems have largely been used independently in proof-of-concept experiments [1,2]. In this talk I will discuss applications of a combined scanless volumetric imaging and volumetric illumination system applied to 4D calcium imaging and photostimulation of neurons in vivo and in vitro. The volumes resulting from these methods are large (>500,000 voxels per time point), collected at 10-100 frames per second, and highly correlated in space and time. Analyzing such data has required the development and application of machine learning methods appropriate to large, sparse, nonnegative data, as well as the estimation of neural graphical models from calcium transients. This talk will cover the reconstruction and creation of volumes in a microscope using Light Fields [1,2], and the current state-of-the-art for analyzing these large volumes in the context of calcium imaging and optogenetics. <br />
<br />
[1] M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz. Light Field Microscopy. ACM Transactions on Graphics 25(3), Proceedings of SIGGRAPH 2006.<br />
[2] M. Levoy, Z. Zhang, and I. McDowall. Recording and controlling the 4D light field in a microscope. Journal of Microscopy, Volume 235, Part 2, 2009, pp. 144-162. Cover article.<br />
<br />
BIO: Logan received bachelor's degrees with honors in Biology and Psychology, and a master's degree in Statistics, from Stanford. He is a Ph.D. candidate in the Neurosciences Program working in the labs of Karl Deisseroth and Patrick Suppes, and a trainee at the Stanford Center for Mind, Brain, and Computation. He is interested in developing and applying novel computational imaging and machine learning techniques in order to observe, control, and understand neuronal circuit dynamics.<br />
<br />
'''7 June 2012''' (Thursday)<br />
* Speaker: Mitya Chklovskii<br />
* Affiliation: Janelia Farm Research Campus, HHMI<br />
* Host: Bruno<br />
* Status:<br />
* Title:<br />
* Abstract:<br />
<br />
'''27 June 2012''' <br />
* Speaker: Jerry Feldman<br />
* Affiliation:<br />
* Host: Bruno<br />
* Status:<br />
* Title:<br />
* Abstract:<br />
<br />
'''30 July 2012''' <br />
* Speaker: Lucas Theis<br />
* Affiliation: Matthias Bethge lab, Werner Reichardt Centre for Integrative Neuroscience, Tübingen<br />
* Host: Jascha<br />
* Status: Confirmed<br />
* Title: Hierarchical models of natural images<br />
* Abstract: Probabilistic models of natural images have been used to solve a variety of computer vision tasks and as a means to better understand the computations performed by the visual system in the brain. Many theoretical considerations and biological observations suggest that natural image models should be hierarchically organized, yet to date the best-known models are still based on what are better described as shallow representations. In this talk, I will present two image models. One is based on the idea of Gaussianization for greedily constructing hierarchical generative models; I will show that when combined with independent subspace analysis, it is able to compete with the state of the art for modeling image patches. The other combines mixtures of Gaussian scale mixtures with a directed graphical model and multiscale image representations, and is able to generate highly structured images of arbitrary size. Evaluating the model's likelihood and comparing it to a large number of other image models shows that it might well be the best model for natural images yet.<br />
<br />
(joint work with Reshad Hosseini and Matthias Bethge)<br />
<br />
=== 2010/11 academic year ===<br />
<br />
'''02 Sep 2010'''<br />
* Speaker: Johannes Burge<br />
* Affiliation: University of Texas at Austin<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''8 Sep 2010'''<br />
* Speaker: Tobi Szuts<br />
* Affiliation: Meister Lab/ Harvard U.<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Wireless recording of neural activity in the visual cortex of a freely moving rat.<br />
* Abstract: Conventional neural recording systems restrict behavioral experiments to a flat indoor environment compatible with the cable that tethers the subject to the recording instruments. To overcome these constraints, we developed a wireless multi-channel system for recording neural signals from a freely moving animal the size of a rat or larger. The device takes up to 64 voltage signals from implanted electrodes, samples each at 20 kHz, time-division multiplexes them onto a single output line, and transmits that output by radio frequency to a receiver and recording computer up to 60 m away. The system introduces less than 4 μV RMS of electrode-referred noise, comparable to wired recording systems and considerably less than biological noise, and it offers a greater channel count or transmission distance than existing telemetry systems. The wireless system has been used to record from the visual cortex of a rat under unconstrained conditions. Outdoor recordings show that V1 activity is modulated by nest-building activity. During unguided behavior indoors, neurons responded rapidly and consistently to changes in light level, suppressive effects were prominent in response to an illuminant transition, and firing rate was strongly modulated by locomotion. Neural firing in the visual cortex is relatively sparse, and moderate correlations are observed over large distances, suggesting that synchrony is driven by global processes.<br />
<br />
'''29 Sep 2010'''<br />
* Speaker: Vikash Gilja<br />
* Affiliation: Stanford University<br />
* Host: Charles<br />
* Status: Confirmed<br />
* Title: Towards Clinically Viable Neural Prosthetic Systems.<br />
* Abstract:<br />
<br />
'''20 Oct 2010'''<br />
* Speaker: Alexandre Francois<br />
* Affiliation: USC<br />
* Host: <br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''3 Nov 2010'''<br />
* Speaker: Eric Jonas and Vikash Mansinghka<br />
* Affiliation: Navia Systems<br />
* Host: Jascha<br />
* Status: Confirmed<br />
* Title: Natively Probabilistic Computation: Principles, Artifacts, Architectures and Applications<br />
* Abstract: Complex probabilistic models and Bayesian inference are becoming increasingly critical across science and industry, especially in large-scale data analysis. They are also central to our best computational accounts of human cognition, perception and action. However, all these efforts struggle with the infamous curse of dimensionality. Rich probabilistic models can seem hard to write and even harder to solve, as specifying and calculating probabilities often appears to require the manipulation of exponentially (and sometimes infinitely) large tables of numbers.<br />
<br />
We argue that these difficulties reflect a basic mismatch between the needs of probabilistic reasoning and the deterministic, functional orientation of our current hardware, programming languages and CS theory. To mitigate these issues, we have been developing a stack of abstractions for natively probabilistic computation, based around stochastic simulators (or samplers) for distributions, rather than evaluators for deterministic functions. Ultimately, our aim is to produce a model of computation and the associated hardware and programming tools that are as suited for uncertain inference and decision-making as our current computers are for precise arithmetic.<br />
<br />
In this talk, we will give an overview of the entire stack of abstractions supporting natively probabilistic computation, with technical detail on several hardware and software artifacts we have implemented so far. We will also touch on some new theoretical results regarding the computational complexity of probabilistic programs. Throughout, we will motivate and connect this work to some current applications in biomedical data analysis and computer vision, as well as potential hypotheses regarding the implementation of probabilistic computation in the brain.<br />
<br />
This talk includes joint work with Keith Bonawitz, Beau Cronin, Cameron Freer, Daniel Roy and Joshua Tenenbaum.<br />
<br />
BRIEF BIOGRAPHY<br />
<br />
Vikash Mansinghka is a co-founder and the CTO of Navia Systems, a venture-funded startup company building natively probabilistic computing machines. He spent 10 years at MIT, eventually earning an S.B. in Mathematics, an S.B. in Computer Science, an MEng in Computer Science, and a PhD in Computation. He held graduate fellowships from the NSF and MIT Lincoln Laboratory, and his PhD dissertation won the 2009 MIT George M. Sprowls award for best dissertation in computer science. He currently serves on DARPA's Information Science and Technology (ISAT) Study Group.<br />
<br />
Eric Jonas is a co-founder of Navia Systems, responsible for in-house accelerated inference research and development. He spent ten years at MIT, where he earned SB degrees in electrical engineering and computer science and in neurobiology, an MEng in EECS, with a neurobiology PhD expected soon. He is passionate about biological applications of probabilistic reasoning and hopes to use Navia's capabilities to combine data from biological science, clinical histories, and patient outcomes into seamless models.<br />
<br />
'''8 Nov 2010'''<br />
* Speaker: Patrick Ruther<br />
* Affiliation: Imtek, University of Freiburg<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''10 Nov 2010'''<br />
* Speaker: Aurel Lazar<br />
* Affiliation: Department of Electrical Engineering, Columbia University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Encoding Visual Stimuli with a Population of Hodgkin-Huxley Neurons<br />
* Abstract: We first present a general framework for the reconstruction of natural video scenes encoded with a population of spiking neural circuits with random thresholds. The visual encoding system consists of a bank of filters, modeling the visual receptive fields, in cascade with a population of neural circuits, modeling encoding with spikes in the early visual system. The neuron models considered include integrate-and-fire neurons and ON-OFF neuron pairs with threshold-and-fire spiking mechanisms; all thresholds are assumed to be random. We show that for both time-varying and space-time-varying stimuli, neural spike encoding is akin to taking noisy measurements on the stimulus. Second, we formulate the reconstruction problem as the minimization of a suitable cost functional in a finite-dimensional vector space and provide an explicit algorithm for stimulus recovery. We also present a general solution using the theory of smoothing splines in Reproducing Kernel Hilbert Spaces. We provide examples for both synthetic video and natural scenes, and show that the quality of the reconstruction degrades gracefully as the threshold variability of the neurons increases. Third, we demonstrate a number of simple operations on the original visual stimulus, including translations, rotations and zooming, all executed natively in the spike domain; the processed spike trains are decoded for the faithful recovery of the stimulus and its transformations. Finally, we extend the above results to neural encoding circuits built with Hodgkin-Huxley neurons.<br />
References:<br />
Aurel A. Lazar, Eftychios A. Pnevmatikakis and Yiyin Zhou, Encoding Natural Scenes with Neural Circuits with Random Thresholds, Vision Research, 2010, Special Issue on Mathematical Models of Visual Coding, http://dx.doi.org/10.1016/j.visres.2010.03.015<br />
Aurel A. Lazar, Population Encoding with Hodgkin-Huxley Neurons, IEEE Transactions on Information Theory, Volume 56, Number 2, pp. 821-837, February 2010, Special Issue on Molecular Biology and Neuroscience, http://dx.doi.org/10.1109/TIT.2009.2037040<br />
<br />
'''11 Nov 2010''' (UCB holiday)<br />
* Speaker: Martha Nari Havenith<br />
* Affiliation: UCL<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: Finding spike timing in the visual cortex - Oscillations as the internal clock of vision?<br />
* Abstract:<br />
<br />
'''19 Nov 2010''' (note: on Friday because of SFN)<br />
* Speaker: Dan Butts<br />
* Affiliation: UMD<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: Common roles of inhibition in visual and auditory processing.<br />
* Abstract: The role of inhibition in sensory processing is often obscured in extracellular recordings, because the absence of a neuronal response associated with inhibition might also be explained by a simple lack of excitation. However, increasingly, evidence from intracellular recordings demonstrates important roles of inhibition in shaping the stimulus selectivity of sensory neurons in both the visual and auditory systems. We have developed a nonlinear modeling approach that can identify putative excitatory and inhibitory inputs to a neuron using standard extracellular recordings, and have applied these techniques to understand the role of inhibition in shaping sensory processing in visual and auditory areas. In pre-cortical visual areas (retina and LGN), we find that inhibition likely plays a role in generating temporally precise responses, and mediates adaptation to changing contrast. In an auditory pre-cortical area (inferior colliculus) identified inhibition has nearly identical appearance and functions in temporal processing and adaptation. Thus, we predict common roles of inhibition in these sensory areas, and more generally demonstrate general methods for characterizing the nonlinear computations that comprise sensory processing.<br />
<br />
'''24 Nov 2010'''<br />
* Speaker: Eizaburo Doi<br />
* Affiliation: NYU<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
<br />
'''29 Nov 2010 - informal talk'''<br />
* Speaker: Eero Lehtonen<br />
* Affiliation: UTU Finland<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Memristors<br />
* Abstract:<br />
<br />
'''1 Dec 2010'''<br />
* Speaker: Gadi Geiger<br />
* Affiliation: MIT<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: Visual and Auditory Perceptual Modes that Characterize Dyslexics<br />
* Abstract: I will describe how dyslexics' visual and auditory perception is wider and more diffuse than that of typical readers, which suggests wider neural tuning in dyslexics. In addition, I will describe how this processing relates to difficulties in reading. Strengthening the argument, and more importantly helping dyslexics, I will describe a regimen of practice that results in improved reading in dyslexics while narrowing perception.<br />
<br />
<br />
'''13 Dec 2010'''<br />
* Speaker: Jorg Lueke<br />
* Affiliation: FIAS<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Linear and Non-linear Approaches to Component Extraction and Their Applications to Visual Data<br />
* Abstract: In the nervous system of humans and animals, sensory data are represented as combinations of elementary data components. While for data such as sound waveforms the elementary components combine linearly, other data can better be modeled by non-linear forms of component superpositions. I motivate and discuss two models with binary latent variables: one using standard linear superpositions of basis functions and one using non-linear superpositions. Crucial for the applicability of both models are efficient learning procedures. I briefly introduce a novel training scheme (ET) and show how it can be applied to probabilistic generative models. For linear and non-linear models the scheme efficiently infers the basis functions as well as the level of sparseness and data noise. In large-scale applications to image patches, we show results on the statistics of inferred model parameters. Differences between the linear and non-linear models are discussed, and both models are compared to results of standard approaches in the literature and to experimental findings. Finally, I briefly discuss learning in a recent model that takes explicit component occlusions into account.<br />
<br />
'''15 Dec 2010'''<br />
* Speaker: Claudia Clopath<br />
* Affiliation: Université Paris Descartes<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
<br />
'''18 Jan 2011'''<br />
* Speaker: Siwei Lyu<br />
* Affiliation: Computer Science Department, University at Albany, SUNY<br />
* Host: Bruno<br />
* Status: confirmed <br />
* Title: Divisive Normalization as an Efficient Coding Transform: Justification and Evaluation<br />
* Abstract:<br />
<br />
'''19 Jan 2011'''<br />
* Speaker: David Field (informal talk)<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: Tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''25 Jan 2011'''<br />
* Speaker: Ruth Rosenholtz<br />
* Affiliation: Dept. of Brain & Cognitive Sciences, Computer Science and AI Lab, MIT<br />
* Host: Bruno<br />
* Status: Confirmed <br />
* Title: What your visual system sees where you are not looking<br />
* Abstract:<br />
<br />
'''26 Jan 2011'''<br />
* Speaker: Ernst Niebur<br />
* Affiliation: Johns Hopkins U<br />
* Host: Fritz<br />
* Status: Confirmed <br />
* Title: <br />
* Abstract:<br />
<br />
'''16 March 2011'''<br />
* Speaker: Vladimir Itskov<br />
* Affiliation: University of Nebraska-Lincoln<br />
* Host: Chris<br />
* Status: Confirmed <br />
* Title: <br />
* Abstract:<br />
<br />
'''23 March 2011'''<br />
* Speaker: Bruce Cumming<br />
* Affiliation: National Institutes of Health<br />
* Host: Ivana<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract:<br />
<br />
'''27 April 2011'''<br />
* Speaker: Lubomir Bourdev<br />
* Affiliation: Computer Science, UC Berkeley<br />
* Host:Bruno<br />
* Status: Confirmed<br />
* Title: "Poselets and Their Applications in High-Level Computer Vision Problems"<br />
* Abstract:<br />
<br />
'''12 May 2011 (note: Thursday)'''<br />
* Speaker: Jack Culpepper<br />
* Affiliation: Redwood Center/EECS<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''26 May 2011'''<br />
* Speaker: Ian Stevenson<br />
* Affiliation: Northwestern University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Explaining tuning curves by estimating interactions between neurons<br />
* Abstract: One of the central tenets of systems neuroscience is that tuning curves are a byproduct of the interactions between neurons. Using multi-electrode recordings and recently developed inference techniques, we can begin to examine this idea in detail and study how well we can explain the functional properties of neurons using the activity of other simultaneously recorded neurons. Here we examine datasets from 6 different brain areas recorded during typical sensorimotor tasks, each with ~100 simultaneously recorded neurons. Using these datasets, we measured the extent to which interactions between neurons can explain the tuning properties of individual neurons. We found that, in almost all areas, modeling interactions between 30-50 neurons allows more accurate spike prediction than tuning curves. This suggests that tuning can, in some sense, be explained by interactions between neurons in a variety of brain areas, even when recordings consist of relatively small numbers of neurons.<br />
<br />
'''1 June 2011'''<br />
* Speaker: Michael Oliver<br />
* Affiliation: Gallant lab<br />
* Host: Bruno<br />
* Status: Tentative <br />
* Title: <br />
* Abstract:<br />
<br />
'''8 June 2011'''<br />
* Speaker: Alyson Fletcher<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: Generalized Approximate Message Passing for Neural Receptive Field Estimation and Connectivity<br />
* Abstract: Fundamental to understanding sensory encoding and connectivity of neurons are effective tools for developing and validating complex mathematical models from experimental data. In this talk, I present a graphical-models approach to the problems of neural connectivity reconstruction under multi-neuron excitation and of receptive field estimation of sensory neurons in response to stimuli. I describe a new class of Generalized Approximate Message Passing (GAMP) algorithms for a general class of inference problems on graphical models, based on Gaussian approximations of loopy belief propagation. The GAMP framework is extremely general, provides a systematic procedure for incorporating a rich class of nonlinearities, and is computationally tractable with large amounts of data. In addition, for both the connectivity reconstruction and parameter estimation problems, I show that GAMP-based estimation can naturally incorporate sparsity constraints in the model that arise from the fact that only a small fraction of the potential inputs have any influence on the output of a particular neuron. A simulation of reconstruction of cortical neural mapping under multi-neuron excitation shows that GAMP offers improvement over previous compressed sensing methods. The GAMP method is also validated on estimation of linear-nonlinear-Poisson (LNP) cascade models for neural responses of salamander retinal ganglion cells.<br />
<br />
=== 2009/10 academic year ===<br />
<br />
'''2 September 2009''' <br />
* Speaker: Keith Godfrey<br />
* Affiliation: University of Cambridge<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''7 October 2009'''<br />
* Speaker: Anita Schmid<br />
* Affiliation: Cornell University<br />
* Host: Kilian<br />
* Status: Confirmed<br />
* Title: Subpopulations of neurons in visual area V2 perform differentiation and integration operations in space and time<br />
* Abstract: The interconnected areas of the visual system work together to find object boundaries in visual scenes. Primary visual cortex (V1) mainly extracts oriented luminance boundaries, while secondary visual cortex (V2) also detects boundaries defined by differences in texture. How the outputs of V1 neurons are combined to allow for the extraction of these more complex boundaries in V2 is as yet unclear. To address this question, we probed the processing of orientation signals in single neurons in V1 and V2, focusing on response dynamics of neurons to patches of oriented gratings and to combinations of gratings in neighboring patches and sequential time frames. We found two kinds of response dynamics in V2, both of which are different from those of V1 neurons. While V1 neurons in general prefer one orientation, one subpopulation of V2 neurons (“transient”) shows a temporally dynamic preference, resulting in a preference for changes in orientation. The second subpopulation of V2 neurons (“sustained”) responds similarly to V1 neurons, but with a delay. The dynamics of nonlinear responses to combinations of gratings reinforce these distinctions: the dynamics enhance the preference of V1 neurons for continuous orientations, and enhance the preference of V2 transient neurons for discontinuous ones. We propose that transient neurons in V2 perform a differentiation operation on the V1 input, both spatially and temporally, while the sustained neurons perform an integration operation. We show that a simple feedforward network with delayed inhibition can account for the temporal but not the spatial differentiation operation.<br />
<br />
'''28 October 2009'''<br />
* Speaker: Andrea Benucci<br />
* Affiliation: Institute of Ophthalmology, University College London<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Stimulus dependence of the functional connectivity between neurons in primary visual cortex<br />
* Abstract: It is known that visual stimuli are encoded by the concerted activity of large populations of neurons in visual cortical areas. However, it is only recently that recording techniques have been made available to study such activations from large ensembles of neurons simultaneously, with millisecond temporal precision and tens of microns spatial resolution. I will present data from voltage-sensitive dye (VSD) imaging and multi-electrode recordings (“Utah” probes) from the primary visual cortex of the cat (V1). I will discuss the relationship between two fundamental cortical maps of the visual system: the map of retinotopy and the map of orientation. Using spatially localized and full-field oriented stimuli, we studied the functional interdependency of these maps. I will describe traveling and standing waves of cortical activity and their key role as a dynamical substrate for the spatio-temporal coding of visual information. I will further discuss the properties of the spatio-temporal code in the context of continuous visual stimulation. While recording population responses to a sequence of oriented stimuli, we asked how responses to individual stimuli summate over time. We found that such rules are mostly linear, supporting the idea that spatial and temporal codes in area V1 operate largely independently. However, these linear rules of summation fail when the visual drive is removed, suggesting that the visual cortex can readily switch between a dynamical regime where either feed-forward or intra-cortical inputs determine the response properties of the network.<br />
<br />
'''12 November 2009 (Thursday)'''<br />
* Speaker: Song-Chun Zhu<br />
* Affiliation: UCLA<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''18 November 2009'''<br />
* Speaker: Dan Graham<br />
* Affiliation: Dept. of Mathematics, Dartmouth College<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: The Packet-Switching Brain: A Hypothesis<br />
* Abstract: Despite great advances in our understanding of neural responses to natural stimuli, the basic structure of the neural code remains elusive. In this talk, I will describe a novel hypothesis regarding the fundamental structure of neural coding in mammals. In particular, I propose that an internet-like routing architecture (specifically packet-switching) underlies neocortical processing, and I propose means of testing this hypothesis via neural response sparseness measurements. I will synthesize a host of suggestive evidence that supports this notion and will, more generally, argue in favor of a large-scale shift from the now-dominant “computer metaphor” to the “internet metaphor.” This shift is intended to spur new thinking with regard to neural coding, and its main contribution is to privilege communication over computation as the prime goal of neural systems.<br />
<br />
'''16 December 2009'''<br />
* Speaker: Pietro Berkes<br />
* Affiliation: Volen Center for Complex Systems, Brandeis University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Generative models of vision: from sparse coding toward structured models<br />
* Abstract: From a computational perspective, one can think of visual perception as the problem of analyzing the light patterns detected by the retina to recover their external causes. This process requires combining the incoming sensory evidence with internal prior knowledge about general properties of visual elements and the way they interact, and can be formalized in a class of models known as causal generative models. In the first part of the talk, I will discuss the first and most established generative model, namely the sparse coding model. Sparse coding has been largely successful in showing how the main characteristics of simple-cell receptive fields can be accounted for solely by the statistics of natural images. I will briefly review the evidence supporting this model, and contrast it with recent data from the primary visual cortex of ferrets and rats showing that the sparseness of neural activity across development and under anesthesia seems to follow trends opposite to those predicted by sparse coding. In the second part, I will argue that the generative point of view calls for models of natural images that take into account more of the structure of the visual environment. I will present a model that takes a first step in this direction by incorporating the fundamental distinction between identity and attributes of visual elements. After learning, the model mirrors several aspects of the organization of V1, and results in a novel interpretation of complex and simple cells as parallel populations of cells coding for different aspects of the visual input. Further steps toward more structured generative models might thus lead to a more comprehensive account of processing in the visual cortex.<br />
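As background for the sparse coding model discussed above, its inference step (finding a sparse set of coefficients that reconstructs an input under a fixed dictionary) can be sketched with a generic iterative soft-thresholding loop. This is an illustrative sketch under assumed parameters (dictionary size, sparsity penalty, step size), not the specific algorithm used in the talk:

```python
import numpy as np

def sparse_code(x, D, lam=0.1, steps=200, lr=0.1):
    """Infer coefficients a minimizing 0.5*||x - D a||^2 + lam*||a||_1
    via simple ISTA-style updates: a gradient step on the reconstruction
    error followed by soft thresholding, which drives most coefficients
    to zero."""
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        a = a + lr * D.T @ (x - D @ a)                           # improve the fit
        a = np.sign(a) * np.maximum(np.abs(a) - lr * lam, 0.0)   # shrink toward sparsity
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(16, 32))
D /= np.linalg.norm(D, axis=0)        # unit-norm dictionary elements ("receptive fields")
a_true = np.zeros(32)
a_true[[3, 11]] = [1.5, -2.0]         # a genuinely sparse cause
x = D @ a_true                        # observed "image patch"
a_hat = sparse_code(x, D)
```

The overcomplete dictionary (32 elements for a 16-dimensional input) plays the role of the learned receptive fields; the L1 penalty is what yields the sparse activity the abstract refers to.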
<br />
'''6 January 2010'''<br />
* Speaker: Susanne Still<br />
* Affiliation: U of Hawaii<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''20 January 2010'''<br />
* Speaker: Tom Dean<br />
* Affiliation: Google<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Accelerating Computer Vision and Machine Learning Algorithms with Graphics Processors<br />
* Abstract: Graphics processors (GPUs) and massively-multi-core architectures are becoming more powerful, less costly, and more energy-efficient, and the related programming language issues are beginning to sort themselves out. That said, most researchers don’t want to write code that depends on any particular architecture or parallel programming model. Linear algebra, Fourier analysis and image processing have standard libraries that are being ported to exploit SIMD parallelism in GPUs. We can depend on the massively-multi-core machines du jour to support these libraries and on the high-performance-computing (HPC) community to do the porting for us or with us. These libraries can significantly accelerate important applications in image processing, data analysis and information retrieval. We can develop APIs and the necessary run-time support so that code relying on these libraries will run on any machine in a cluster of computers but exploit GPUs whenever available. This strategy allows us to move toward hybrid computing models that enable a wider range of opportunities for parallelism without requiring the special training of programmers or the disadvantages of developing code that depends on specialized hardware or programming models. This talk summarizes the state of the art in massively-multi-core architectures, presents experimental results that demonstrate the potential for significant performance gains in the two general areas of image processing and machine learning, provides examples of the proposed programming interface, and presents more detailed experimental results on one particular problem involving video-content analysis.<br />
<br />
'''27 January 2010'''<br />
* Speaker: David Philiponna<br />
* Affiliation: Paris<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''24 February 2010'''<br />
* Speaker: Gordon Pipa<br />
* Affiliation: U Osnabrueck/MPI Frankfurt<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''3 March 2010'''<br />
* Speaker: Gaute Einevoll<br />
* Affiliation: UMB, Norway<br />
* Host: Amir<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
<br />
'''4 March 2010'''<br />
* Speaker: Harvey Swadlow<br />
* Affiliation: <br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''8 April 2010'''<br />
* Speaker: Alan Yuille <br />
* Affiliation: UCLA<br />
* Host: Amir<br />
* Status: Confirmed (for 1pm)<br />
* Title: <br />
* Abstract:<br />
<br />
'''28 April 2010'''<br />
* Speaker: Dharmendra Modha - cancelled<br />
* Affiliation: IBM<br />
* Host:Fritz<br />
* Status: Cancelled<br />
* Title: <br />
* Abstract:<br />
<br />
'''5 May 2010'''<br />
* Speaker: David Zipser<br />
* Affiliation: UCB<br />
* Host: Daniel Little<br />
* Status: Tentative<br />
* Title: Brytes 2:<br />
* Abstract:<br />
<br />
Brytes are little brains that can be assembled into larger, smarter brains. In my first talk I presented a biologically plausible, computationally tractable model of brytes and described how they can be used as subunits to build brains with interesting behaviors.<br />
<br />
In this talk I will first show how large numbers of brytes can cooperate to perform complicated actions, such as arm and hand manipulations in the presence of obstacles. I will then describe a strategy for a higher level of control that informs each bryte of the role it should play in accomplishing the current task. These results could have considerable significance for understanding the brain and may be applicable to robotics and brain-machine interfaces (BMI).<br />
<br />
'''12 May 2010'''<br />
* Speaker: Frank Werblin (Redwood group meeting - internal only)<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: Tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''19 May 2010'''<br />
* Speaker: Anna Judith<br />
* Affiliation: UCB<br />
* Host: Daniel Little (Redwood Lab Meeting - internal only)<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:</div>Giselyhttps://rctn.org/w/index.php?title=People_at_the_Redwood_Center&diff=8105People at the Redwood Center2015-04-06T23:58:57Z<p>Gisely: </p>
<hr />
<div>__NOEDITSECTION__<br />
== Faculty ==<br />
<br />
[[Image:deweese.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Michael R. DeWeese''', Associate Professor, HWNI and Physics <br /><br />
[http://physics.berkeley.edu/people/faculty/michael-deweese Physics home page]<br />
<br />
<br />
[[Image:bruno.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Bruno Olshausen''', Director and Professor, HWNI and School of Optometry <br /><br />
[http://redwood.berkeley.edu/bruno home page]<br />
<br />
<br />
[[Image:fritz.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Fritz Sommer''', Acting Director and Associate Adjunct Professor, HWNI <br /><br />
[[Fritz Sommer|home page]]<br />
<br />
<br />
== Research Scientists ==<br />
<br />
[[Image:tony.jpg|170px|left]]<br />
<br style="clear:both;" /><br />
'''Tony Bell''' <br /><br />
[[Tony Bell|home page]]<br />
<br />
<br />
[[Image:tim.jpg]] <br />
<br style="clear:both;" /><br />
'''Tim Blanche''' <br /><br />
[[Tim Blanche|home page]]<br />
<br />
<br />
<br style="clear:both;" /><br />
'''Pentti Kanerva''' <br /><br />
[[Pentti Kanerva|home page]]<br />
<br />
<br />
[[Image:kilian.jpg|115px|left|link=Kilian Koepsell|Kilian Koepsell]]<br />
<br style="clear:both;" /><br />
'''Kilian Koepsell''' <br /><br />
[http://redwood.berkeley.edu/kilian/ home page] <br /><br />
[http://redwood.berkeley.edu/klab/ lab page]<br />
<br />
<br />
'''Vivienne L'Ecuyer Ming''', Visiting Scholar<br /><br />
[http://www.vivienneming.com home page]<br />
<br />
<br />
[[Image:bnfcjh.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Chris Hillar''' <br /><br />
[http://www.msri.org/people/members/chillar/ home page]<br />
<br />
<br />
[[Image:Allie_photo.jpeg]]<br />
<br style="clear:both;" /><br />
'''Alyson "Allie" Fletcher''' <br/><br />
Affiliated Researcher & UCSC Professor <br/><br />
[http://users.soe.ucsc.edu/~afletcher/ home page]<br />
<br />
<br style="clear:both;" /><br />
'''Karl Zipser''' <br/><br />
Assistant Researcher<br/><br />
[http://karlzipser.com home page]<br />
== Postdocs ==<br />
<br />
<br style="clear:both;" /><br />
'''Alex Bujan''' <br /><br />
[[AlexB | home page]]<br />
<br />
== Staff ==<br />
[[Image:JeffTeeters.jpg|115px|left]]<br />
<br style="clear:both;" /><br />
'''Jeff Teeters''' <br /><br />
[[Jeff Teeters|home page]]<br />
<br />
<br />
== Students ==<br />
<br />
[[File:Alex.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Alex Anderson''' <br /><br />
[http://www.ocf.berkeley.edu/~aga/ Website]<br />
<br />
[[File:JamesA.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''James Arnemann''' <br /><br />
[[James Arnemann|home page]]<br />
<br />
'''Brian Cheung''' <br /><br />
[[Brian Cheung|home page]]<br />
<br />
'''Dylan Paiton''' <br /><br />
[http://vision.berkeley.edu/?p=3052 home page]<br />
<br />
[[File:yubei.png|150px|left]]<br />
<br style="clear:both;" /><br />
'''Yubei Chen''' <br /><br />
[[Yubei Chen|home page]]<br />
<br />
'''Guy Isely''' <br /><br />
[http://gisely.github.io/ home page]<br />
<br />
'''Mat Leonard''' <br /><br />
[[Mat Leonard|home page]]<br />
<br />
'''Jesse Livezey''' <br /><br />
[http://jesselivezey.com/ home page]<br />
<br />
'''Sarah Marzen''' <br /><br />
[[Sarah Marzen|home page]]<br />
<br />
[[File:Shariq222.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Shariq Mobin''' <br /><br />
[[Shariq Mobin|home page]]<br />
<br />
[[Image:mayur_mudigonda.jpg|alt=Mayur: the Magical Machine Learning Pony!|left]]<br />
<br style="clear:both;" /><br />
'''Mayur Mudigonda''' <br /><br />
[http://redwood.berkeley.edu/mayur home page]<br />
<br />
'''Chayut Thanapirom''' <br /><br />
[[Chayut Thanapirom|home page]]<br />
<br />
'''Joseph Thurakal''' <br /><br />
[[Joseph Thurakal|home page]]<br />
<br />
[[File:ogre.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Chris Warner''' <br /><br />
[https://redwood.berkeley.edu/wiki/User:Cwarner home page]<br />
<br />
[[File:eric.JPG|150px|left]]<br />
<br style="clear:both;" /><br />
'''Eric Weiss''' <br /><br />
[[Eric Weiss|home page]]<br />
<br />
== Alumni ==<br />
<br />
<br style="clear:both;" /><br />
'''Urs Koester''' <br /><br />
[[Urs | home page]]<br />
<br />
[[File:Gautam.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Gautam Agarwal''' <br /><br />
Postdoc at the Champalimaud Centre for the Unknown<br /><br />
[http://www.neuro.fchampalimaud.org/en/research/investigators/research-groups/group/Mainen/ home page]<br />
<br />
[[Image:ian.jpg|130px|left]]<br />
<br style="clear:both;" /><br />
'''Ian Stevenson''' <br /><br />
Currently at University of Connecticut<br /><br />
[http://homepages.uconn.edu/~ias13002/ home page]<br />
<br />
[[Image:paul_ivanov.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Paul Ivanov''' <br /><br />
[http://pirsquared.org/ home page]<br />
<br />
'''Daniel Little''' <br /><br />
[[Daniel Little|home page]]<br />
<br />
'''Chris Rodgers''' <br /><br />
[http://chris-rodgers.com/ home page]<br />
<br />
'''Sharif Corinaldi''' <br /><br />
[[Sharif Corinaldi|home page]]<br />
<br />
'''Amir Khosrowshahi''' <br /><br />
[[Amir Khosrowshahi|home page]]<br />
<br />
'''Antony Lee''' <br /><br />
[[Antony Lee|home page]]<br />
<br />
'''Chetan Nandakumar''' <br /><br />
[[Chetan Nandakumar|home page]]<br />
<br />
'''Jascha Sohl-Dickstein''' <br /><br />
[[Jascha Sohl-Dickstein|home page]]<br />
<br />
[[Image:jiminy.gif|left]]<br />
<br style="clear:both;" /><br />
'''Jimmy Wang''' <br /><br />
[https://redwood.berkeley.edu/jwang/index.html home page]<br />
<br />
'''Joel Zylberberg''' <br /><br />
[[Joel Zylberberg|home page]]<br />
<br />
'''Jack Culpepper''' <br /><br />
[http://www.cs.berkeley.edu/~bjc/ home page]<br />
<br />
'''Badr Faisal Albanna''' <br /><br />
[[Badr Faisal Albanna|home page]]<br />
<br />
'''Peter Battaglino''' <br /><br />
[[Peter Battaglino|home page]]<br />
<br />
[[Image:nicole.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Nicole Carlson''' <br /><br />
[[Nicole Carlson|home page]]<br />
<br />
'''Alfonso Apicella''' <br /><br />
[[Alfonso Apicella|home page]]<br />
<br />
[[Image:charles.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Charles Cadieu''' <br /><br />
[http://charles.cadieu.us home page]<br />
<br />
[[Image:NicolFace.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Nicol Harper''' <br /><br />
[[Nicol Harper|home page]]<br />
<br />
'''Vivek Ayer''' <br /><br />
[[Vivek Ayer|home page]]<br />
<br />
[[Image:matthias.jpg|left]] <br />
<br style="clear:both;" /><br />
'''Matthias Bethge''', Postdoctoral Fellow, 2005 <br /><br />
Currently at the Max-Planck Institute, Tubingen <br /><br />
Winner of the Bernstein Independent Investigator Prize <br /><br />
[http://www.neuro.uni-bremen.de/~mbethge/ home page]<br />
<br />
'''Will Coulter''', MA Neuroscience (2009) <br /><br />
Currently at Siemens Healthcare Diagnostics R&D <br /><br />
[[Will Coulter|home page]] (outdated)<br />
<br />
'''Mohammad Dastjerdi''' <br /><br />
[[Mohammad Dastjerdi|home page]]<br />
<br />
[[Image:pierre.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Pierre Garrigues''' <br /><br />
[http://redwood.berkeley.edu/pierre home page]<br />
<br />
'''Joe Goldbeck''', MA Neuroscience (2012) <br /><br />
<br />
[[Image:thomas.jpg|left]] <br />
<br style="clear:both;" /><br />
'''Thomas Lauritzen''', Research Scientist, 2005-2010 <br /><br />
Currently at Second Sight Medical Products, Sylmar, CA <br /><br />
[[Thomas Lauritzen|home page]]<br />
<br />
[[Image:Gianluca.jpg|115px|left]]<br />
<br style="clear:both;" /><br />
'''Gianluca Monaci''' <br /><br />
Currently at Philips Research, Eindhoven <br /><br />
[[Gianluca Monaci|home page]]<br />
<br />
'''Martin Rehn''' <br /><br />
[[Martin Rehn|home page]]<br />
<br />
[[Image:Rozell.jpg]]<br />
<br style="clear:both;" /><br />
'''Chris Rozell''' <br /><br />
[http://www.ece.rice.edu/~crozell/ home page]<br />
<br />
[[Image:Ivana.jpg|left]] <br />
<br style="clear:both;" /><br />
'''Ivana Tosic''', Postdoctoral researcher, 2009-2011 <br /><br />
Currently with [http://rii.ricoh.com/ Ricoh Innovations Inc.], Menlo Park <br /><br />
[http://ivanatosic.net home page]</div>Giselyhttps://rctn.org/w/index.php?title=People_at_the_Redwood_Center&diff=8103People at the Redwood Center2015-04-06T23:57:47Z<p>Gisely: </p>
<hr />
<div>__NOEDITSECTION__<br />
== Faculty ==<br />
<br />
[[Image:deweese.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Michael R. DeWeese''', Assistant Professor, HWNI and Physics <br /><br />
[http://physics.berkeley.edu/people/faculty/michael-deweese Physics home page]<br />
<br />
<br />
[[Image:bruno.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Bruno Olshausen''', Director and Professor, HWNI and School of Optometry <br /><br />
[http://redwood.berkeley.edu/bruno home page]<br />
<br />
<br />
[[Image:fritz.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Fritz Sommer''', Acting Director and Associate Adjunct Professor, HWNI <br /><br />
[[Fritz Sommer|home page]]<br />
<br />
<br />
== Research Scientists ==<br />
<br />
[[Image:tony.jpg|170px|left]]<br />
<br style="clear:both;" /><br />
'''Tony Bell''' <br /><br />
[[Tony Bell|home page]]<br />
<br />
<br />
[[Image:tim.jpg]] <br />
<br style="clear:both;" /><br />
'''Tim Blanche''' <br /><br />
[[Tim Blanche|home page]]<br />
<br />
<br />
<br style="clear:both;" /><br />
'''Pentti Kanerva''' <br /><br />
[[Pentti Kanerva|home page]]<br />
<br />
<br />
[[Image:kilian.jpg|115px|left|link=Kilian Koepsell|Kilian Koepsell]]<br />
<br style="clear:both;" /><br />
'''Kilian Koepsell''' <br /><br />
[http://redwood.berkeley.edu/kilian/ home page] <br /><br />
[http://redwood.berkeley.edu/klab/ lab page]<br />
<br />
<br />
'''Vivienne L'Ecuyer Ming''', Visiting Scholar<br /><br />
[http://www.vivienneming.com home page]<br />
<br />
<br />
[[Image:bnfcjh.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Chris Hillar''' <br /><br />
[http://www.msri.org/people/members/chillar/ home page]<br />
<br />
<br />
[[Image:Allie_photo.jpeg]]<br />
<br style="clear:both;" /><br />
'''Alyson "Allie" Fletcher''' <br/><br />
Affiliated Researcher & UCSC Professor <br/><br />
[http://users.soe.ucsc.edu/~afletcher/ home page]<br />
<br />
<br style="clear:both;" /><br />
'''Karl Zipser''' <br/><br />
Assistant Researcher<br/><br />
[http://karlzipser.com home page]<br />
== Postdocs ==<br />
<br />
<br style="clear:both;" /><br />
'''Alex Bujan''' <br /><br />
[[AlexB | home page]]<br />
<br />
== Staff ==<br />
[[Image:JeffTeeters.jpg|115px|left]]<br />
<br style="clear:both;" /><br />
'''Jeff Teeters''' <br /><br />
[[Jeff Teeters|home page]]<br />
<br />
<br />
== Students ==<br />
<br />
[[File:Alex.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Alex Anderson''' <br /><br />
[http://www.ocf.berkeley.edu/~aga/ Website]<br />
<br />
[[File:JamesA.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''James Arnemann''' <br /><br />
[[James Arnemann|home page]]<br />
<br />
'''Brian Cheung''' <br /><br />
[[Brian Cheung|home page]]<br />
<br />
'''Dylan Paiton''' <br /><br />
[http://vision.berkeley.edu/?p=3052 home page]<br />
<br />
[[File:yubei.png|150px|left]]<br />
<br style="clear:both;" /><br />
'''Yubei Chen''' <br /><br />
[[Yubei Chen|home page]]<br />
<br />
'''Guy Isely''' <br /><br />
[http://gisely.github.io/ home page]<br />
<br />
'''Mat Leonard''' <br /><br />
[[Mat Leonard|home page]]<br />
<br />
'''Jesse Livezey''' <br /><br />
[http://jesselivezey.com/ home page]<br />
<br />
'''Sarah Marzen''' <br /><br />
[[Sarah Marzen|home page]]<br />
<br />
[[File:Shariq222.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Shariq Mobin''' <br /><br />
[[Shariq Mobin|home page]]<br />
<br />
[[Image:mayur_mudigonda.jpg|alt=Mayur: the Magical Machine Learning Pony!|left]]<br />
<br style="clear:both;" /><br />
'''Mayur Mudigonda''' <br /><br />
[http://redwood.berkeley.edu/mayur home page]<br />
<br />
'''Chayut Thanapirom''' <br /><br />
[[Chayut Thanapirom|home page]]<br />
<br />
'''Joseph Thurakal''' <br /><br />
[[Joseph Thurakal|home page]]<br />
<br />
[[File:ogre.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Chris Warner''' <br /><br />
[https://redwood.berkeley.edu/wiki/User:Cwarner home page]<br />
<br />
[[File:eric.JPG|150px|left]]<br />
<br style="clear:both;" /><br />
'''Eric Weiss''' <br /><br />
[[Eric Weiss|home page]]<br />
<br />
== Alumni ==<br />
<br />
<br style="clear:both;" /><br />
'''Urs Koester''' <br /><br />
[[Urs | home page]]<br />
<br />
[[File:Gautam.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Gautam Agarwal''' <br /><br />
Postdoc at the Champalimaud Centre for the Unknown<br /><br />
[http://www.neuro.fchampalimaud.org/en/research/investigators/research-groups/group/Mainen/ home page]<br />
<br />
[[Image:ian.jpg|130px|left]]<br />
<br style="clear:both;" /><br />
'''Ian Stevenson''' <br /><br />
Currently at University of Connecticut<br /><br />
[http://homepages.uconn.edu/~ias13002/ home page]<br />
<br />
[[Image:paul_ivanov.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Paul Ivanov''' <br /><br />
[http://pisquared.org/ home page]<br />
<br />
'''Daniel Little''' <br /><br />
[[Daniel Little|home page]]<br />
<br />
'''Chris Rodgers''' <br /><br />
[http://chris-rodgers.com/ home page]<br />
<br />
'''Sharif Corinaldi''' <br /><br />
[[Sharif Corinaldi|home page]]<br />
<br />
'''Amir Khosrowshahi''' <br /><br />
[[Amir Khosrowshahi|home page]]<br />
<br />
'''Antony Lee''' <br /><br />
[[Antony Lee|home page]]<br />
<br />
'''Chetan Nandakumar''' <br /><br />
[[Chetan Nandakumar|home page]]<br />
<br />
'''Jascha Sohl-Dickstein''' <br /><br />
[[Jascha Sohl-Dickstein|home page]]<br />
<br />
[[Image:jiminy.gif|left]]<br />
<br style="clear:both;" /><br />
'''Jimmy Wang''' <br /><br />
[https://redwood.berkeley.edu/jwang/index.html home page]<br />
<br />
'''Joel Zylberberg''' <br /><br />
[[Joel Zylberberg|home page]]<br />
<br />
'''Jack Culpepper''' <br /><br />
[http://www.cs.berkeley.edu/~bjc/ home page]<br />
<br />
'''Badr Faisal Albanna''' <br /><br />
[[Badr Faisal Albanna|home page]]<br />
<br />
'''Peter Battaglino''' <br /><br />
[[Peter Battaglino|home page]]<br />
<br />
[[Image:nicole.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Nicole Carlson''' <br /><br />
[[Nicole Carlson|home page]]<br />
<br />
'''Alfonso Apicella''' <br /><br />
[[Alfonso Apicella|home page]]<br />
<br />
[[Image:charles.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Charles Cadieu''' <br /><br />
[http://charles.cadieu.us home page]<br />
<br />
[[Image:NicolFace.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Nicol Harper''' <br /><br />
[[Nicol Harper|home page]]<br />
<br />
'''Vivek Ayer''' <br /><br />
[[Vivek Ayer|home page]]<br />
<br />
[[Image:matthias.jpg|left]] <br />
<br style="clear:both;" /><br />
'''Matthias Bethge''', Postdoctoral Fellow, 2005 <br /><br />
Currently at the Max-Planck Institute, Tubingen <br /><br />
Winner of the Bernstein Independent Investigator Prize <br /><br />
[http://www.neuro.uni-bremen.de/~mbethge/ home page]<br />
<br />
'''Will Coulter''', MA Neuroscience (2009) <br /><br />
Currently at Siemens Healthcare Diagnostics R&D <br /><br />
[[Will Coulter|home page]] (outdated)<br />
<br />
'''Mohammad Dastjerdi''' <br /><br />
[[Mohammad Dastjerdi|home page]]<br />
<br />
[[Image:pierre.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Pierre Garrigues''' <br /><br />
[http://redwood.berkeley.edu/pierre home page]<br />
<br />
'''Joe Goldbeck''', MA Neuroscience (2012) <br /><br />
<br />
[[Image:thomas.jpg|left]] <br />
<br style="clear:both;" /><br />
'''Thomas Lauritzen''', Research Scientist, 2005-2010 <br /><br />
Currently at Second Sight Medical Products, Sylmar, CA <br /><br />
[[Thomas Lauritzen|home page]]<br />
<br />
[[Image:Gianluca.jpg|115px|left]]<br />
<br style="clear:both;" /><br />
'''Gianluca Monaci''' <br /><br />
Currently at Philips Research, Eindhoven <br /><br />
[[Gianluca Monaci|home page]]<br />
<br />
'''Martin Rehn''' <br /><br />
[[Martin Rehn|home page]]<br />
<br />
[[Image:Rozell.jpg]]<br />
<br style="clear:both;" /><br />
'''Chris Rozell''' <br /><br />
[http://www.ece.rice.edu/~crozell/ home page]<br />
<br />
[[Image:Ivana.jpg|left]] <br />
<br style="clear:both;" /><br />
'''Ivana Tosic''', Postdoctoral researcher, 2009-2011 <br /><br />
Currently with [http://rii.ricoh.com/ Ricoh Innovations Inc.], Menlo Park <br /><br />
[http://ivanatosic.net home page]</div>Giselyhttps://rctn.org/w/index.php?title=People_at_the_Redwood_Center&diff=8102People at the Redwood Center2015-04-06T23:57:09Z<p>Gisely: </p>
<hr />
<div>__NOEDITSECTION__<br />
== Faculty ==<br />
<br />
[[Image:deweese.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Michael R. DeWeese''', Assistant Professor, HWNI and Physics <br /><br />
[http://physics.berkeley.edu/people/faculty/michael-deweese Physics home page]<br />
<br />
<br />
[[Image:bruno.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Bruno Olshausen''', Director and Professor, HWNI and School of Optometry <br /><br />
[http://redwood.berkeley.edu/bruno home page]<br />
<br />
<br />
[[Image:fritz.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Fritz Sommer''', Acting Director and Associate Adjunct Professor, HWNI <br /><br />
[[Fritz Sommer|home page]]<br />
<br />
<br />
== Research Scientists ==<br />
<br />
[[Image:tony.jpg|170px|left]]<br />
<br style="clear:both;" /><br />
'''Tony Bell''' <br /><br />
[[Tony Bell|home page]]<br />
<br />
<br />
[[Image:tim.jpg]] <br />
<br style="clear:both;" /><br />
'''Tim Blanche''' <br /><br />
[[Tim Blanche|home page]]<br />
<br />
<br />
<br style="clear:both;" /><br />
'''Pentti Kanerva''' <br /><br />
[[Pentti Kanerva|home page]]<br />
<br />
<br />
[[Image:kilian.jpg|115px|left|link=Kilian Koepsell|Kilian Koepsell]]<br />
<br style="clear:both;" /><br />
'''Kilian Koepsell''' <br /><br />
[http://redwood.berkeley.edu/kilian/ home page] <br /><br />
[http://redwood.berkeley.edu/klab/ lab page]<br />
<br />
<br />
'''Vivienne L'Ecuyer Ming''', Visiting Scholar<br /><br />
[http://www.vivienneming.com home page]<br />
<br />
<br />
[[Image:bnfcjh.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Chris Hillar''' <br /><br />
[http://www.msri.org/people/members/chillar/ home page]<br />
<br />
<br />
[[Image:Allie_photo.jpeg]]<br />
<br style="clear:both;" /><br />
'''Alyson "Allie" Fletcher''' <br/><br />
Affiliated Researcher & UCSC Professor <br/><br />
[http://users.soe.ucsc.edu/~afletcher/ home page]<br />
<br />
<br style="clear:both;" /><br />
'''Karl Zipser''' <br/><br />
Assistant Researcher<br/><br />
[http://karlzipser.com home page]<br />
== Postdocs ==<br />
<br />
<br style="clear:both;" /><br />
'''Alex Bujan''' <br /><br />
[[AlexB | home page]]<br />
<br />
== Staff ==<br />
[[Image:JeffTeeters.jpg|115px|left]]<br />
<br style="clear:both;" /><br />
'''Jeff Teeters''' <br /><br />
[[Jeff Teeters|home page]]<br />
<br />
<br />
== Students ==<br />
<br />
[[File:Alex.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Alex Anderson''' <br /><br />
[http://www.ocf.berkeley.edu/~aga/ Website]<br />
<br />
[[File:JamesA.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''James Arnemann''' <br /><br />
[[James Arnemann|home page]]<br />
<br />
'''Brian Cheung''' <br /><br />
[[Brian Cheung|home page]]<br />
<br />
'''Dylan Paiton''' <br /><br />
[http://vision.berkeley.edu/?p=3052 home page]<br />
<br />
[[File:yubei.png|150px|left]]<br />
<br style="clear:both;" /><br />
'''Yubei Chen''' <br /><br />
[[Yubei Chen|home page]]<br />
<br />
'''Guy Isely''' <br /><br />
[http://gisely.github.io/ home page]<br />
<br />
'''Mat Leonard''' <br /><br />
[[Mat Leonard|home page]]<br />
<br />
'''Jesse Livezey''' <br /><br />
[http://jesselivezey.com/ home page]<br />
<br />
'''Sarah Marzen''' <br /><br />
[[Sarah Marzen|home page]]<br />
<br />
[[File:Shariq222.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Shariq Mobin''' <br /><br />
[[Shariq Mobin|home page]]<br />
<br />
[[Image:mayur_mudigonda.jpg|alt=Mayur: the Magical Machine Learning Pony!|left]]<br />
<br style="clear:both;" /><br />
'''Mayur Mudigonda''' <br /><br />
[http://redwood.berkeley.edu/mayur home page]<br />
<br />
'''Chayut Thanapirom''' <br /><br />
[[Chayut Thanapirom|home page]]<br />
<br />
'''Joseph Thurakal''' <br /><br />
[[Joseph Thurakal|home page]]<br />
<br />
[[File:ogre.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Chris Warner''' <br /><br />
[https://redwood.berkeley.edu/wiki/User:Cwarner home page]<br />
<br />
[[File:eric.JPG|150px|left]]<br />
<br style="clear:both;" /><br />
'''Eric Weiss''' <br /><br />
[[Eric Weiss|home page]]<br />
<br />
== Alumni ==<br />
<br />
<br style="clear:both;" /><br />
'''Urs Koester''' <br /><br />
[[Urs | home page]]<br />
<br />
[[File:Gautam.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Gautam Agarwal''' <br /><br />
Postdoc at the Champalimaud Centre for the Unknown<br /><br />
[http://www.neuro.fchampalimaud.org/en/research/investigators/research-groups/group/Mainen/ home page]<br />
<br />
[[Image:ian.jpg|130px|left]]<br />
<br style="clear:both;" /><br />
'''Ian Stevenson''' <br /><br />
Currently at University of Connecticut<br /><br />
[http://homepages.uconn.edu/~ias13002/ home page]<br />
<br />
[[Image:paul_ivanov.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Paul Ivanov''' <br /><br />
[http://pisquared.org home page]<br />
<br />
'''Daniel Little''' <br /><br />
[[Daniel Little|home page]]<br />
<br />
'''Chris Rodgers''' <br /><br />
[http://chris-rodgers.com/ home page]<br />
<br />
'''Sharif Corinaldi''' <br /><br />
[[Sharif Corinaldi|home page]]<br />
<br />
'''Amir Khosrowshahi''' <br /><br />
[[Amir Khosrowshahi|home page]]<br />
<br />
'''Antony Lee''' <br /><br />
[[Antony Lee|home page]]<br />
<br />
'''Chetan Nandakumar''' <br /><br />
[[Chetan Nandakumar|home page]]<br />
<br />
'''Jascha Sohl-Dickstein''' <br /><br />
[[Jascha Sohl-Dickstein|home page]]<br />
<br />
[[Image:jiminy.gif|left]]<br />
<br style="clear:both;" /><br />
'''Jimmy Wang''' <br /><br />
[https://redwood.berkeley.edu/jwang/index.html home page]<br />
<br />
'''Joel Zylberberg''' <br /><br />
[[Joel Zylberberg|home page]]<br />
<br />
'''Jack Culpepper''' <br /><br />
[http://www.cs.berkeley.edu/~bjc/ home page]<br />
<br />
'''Badr Faisal Albanna''' <br /><br />
[[Badr Faisal Albanna|home page]]<br />
<br />
'''Peter Battaglino''' <br /><br />
[[Peter Battaglino|home page]]<br />
<br />
[[Image:nicole.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Nicole Carlson''' <br /><br />
[[Nicole Carlson|home page]]<br />
<br />
'''Alfonso Apicella''' <br /><br />
[[Alfonso Apicella|home page]]<br />
<br />
[[Image:charles.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Charles Cadieu''' <br /><br />
[http://charles.cadieu.us home page]<br />
<br />
[[Image:NicolFace.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Nicol Harper''' <br /><br />
[[Nicol Harper|home page]]<br />
<br />
'''Vivek Ayer''' <br /><br />
[[Vivek Ayer|home page]]<br />
<br />
[[Image:matthias.jpg|left]] <br />
<br style="clear:both;" /><br />
'''Matthias Bethge''', Postdoctoral Fellow, 2005 <br /><br />
Currently at the Max-Planck Institute, Tubingen <br /><br />
Winner of the Bernstein Independent Investigator Prize <br /><br />
[http://www.neuro.uni-bremen.de/~mbethge/ home page]<br />
<br />
'''Will Coulter''', MA Neuroscience (2009) <br /><br />
Currently at Siemens Healthcare Diagnostics R&D <br /><br />
[[Will Coulter|home page]] (outdated)<br />
<br />
'''Mohammad Dastjerdi''' <br /><br />
[[Mohammad Dastjerdi|home page]]<br />
<br />
[[Image:pierre.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Pierre Garrigues''' <br /><br />
[http://redwood.berkeley.edu/pierre home page]<br />
<br />
'''Joe Goldbeck''', MA Neuroscience (2012) <br /><br />
<br />
[[Image:thomas.jpg|left]] <br />
<br style="clear:both;" /><br />
'''Thomas Lauritzen''', Research Scientist, 2005-2010 <br /><br />
Currently at Second Sight Medical Products, Sylmar, CA <br /><br />
[[Thomas Lauritzen|home page]]<br />
<br />
[[Image:Gianluca.jpg|115px|left]]<br />
<br style="clear:both;" /><br />
'''Gianluca Monaci''' <br /><br />
Currently at Philips Research, Eindhoven <br /><br />
[[Gianluca Monaci|home page]]<br />
<br />
'''Martin Rehn''' <br /><br />
[[Martin Rehn|home page]]<br />
<br />
[[Image:Rozell.jpg]]<br />
<br style="clear:both;" /><br />
'''Chris Rozell''' <br /><br />
[http://www.ece.rice.edu/~crozell/ home page]<br />
<br />
[[Image:Ivana.jpg|left]] <br />
<br style="clear:both;" /><br />
'''Ivana Tosic''', Postdoctoral researcher, 2009-2011 <br /><br />
Currently with [http://rii.ricoh.com/ Ricoh Innovations Inc.], Menlo Park <br /><br />
[http://ivanatosic.net home page]</div>Giselyhttps://rctn.org/w/index.php?title=People_at_the_Redwood_Center&diff=8101People at the Redwood Center2015-04-06T23:56:23Z<p>Gisely: </p>
<hr />
<div>__NOEDITSECTION__<br />
== Faculty ==<br />
<br />
[[Image:deweese.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Michael R. DeWeese''', Assistant Professor, HWNI and Physics <br /><br />
[http://physics.berkeley.edu/people/faculty/michael-deweese Physics home page]<br />
<br />
<br />
[[Image:bruno.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Bruno Olshausen''', Director and Professor, HWNI and School of Optometry <br /><br />
[http://redwood.berkeley.edu/bruno home page]<br />
<br />
<br />
[[Image:fritz.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Fritz Sommer''', Acting Director and Associate Adjunct Professor, HWNI <br /><br />
[[Fritz Sommer|home page]]<br />
<br />
<br />
== Research Scientists ==<br />
<br />
[[Image:tony.jpg|170px|left]]<br />
<br style="clear:both;" /><br />
'''Tony Bell''' <br /><br />
[[Tony Bell|home page]]<br />
<br />
<br />
[[Image:tim.jpg]] <br />
<br style="clear:both;" /><br />
'''Tim Blanche''' <br /><br />
[[Tim Blanche|home page]]<br />
<br />
<br />
<br style="clear:both;" /><br />
'''Pentti Kanerva''' <br /><br />
[[Pentti Kanerva|home page]]<br />
<br />
<br />
[[Image:kilian.jpg|115px|left|link=Kilian Koepsell|Kilian Koepsell]]<br />
<br style="clear:both;" /><br />
'''Kilian Koepsell''' <br /><br />
[http://redwood.berkeley.edu/kilian/ home page] <br /><br />
[http://redwood.berkeley.edu/klab/ lab page]<br />
<br />
<br />
'''Vivienne L'Ecuyer Ming''', Visiting Scholar<br /><br />
[http://www.vivienneming.com home page]<br />
<br />
<br />
[[Image:bnfcjh.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Chris Hillar''' <br /><br />
[http://www.msri.org/people/members/chillar/ home page]<br />
<br />
<br />
[[Image:Allie_photo.jpeg]]<br />
<br style="clear:both;" /><br />
'''Alyson "Allie" Fletcher''' <br/><br />
Affiliated Researcher & UCSC Professor <br/><br />
[http://users.soe.ucsc.edu/~afletcher/ home page]<br />
<br />
<br style="clear:both;" /><br />
'''Karl Zipser''' <br/><br />
Assistant Researcher<br/><br />
[http://karlzipser.com home page]<br />
== Postdocs ==<br />
<br />
<br style="clear:both;" /><br />
'''Alex Bujan''' <br /><br />
[[AlexB | home page]]<br />
<br />
== Staff ==<br />
[[Image:JeffTeeters.jpg|115px|left]]<br />
<br style="clear:both;" /><br />
'''Jeff Teeters''' <br /><br />
[[Jeff Teeters|home page]]<br />
<br />
<br />
== Students ==<br />
<br />
[[File:Alex.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Alex Anderson''' <br /><br />
[http://www.ocf.berkeley.edu/~aga/ Website]<br />
<br />
[[File:JamesA.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''James Arnemann''' <br /><br />
[[James Arnemann|home page]]<br />
<br />
'''Brian Cheung''' <br /><br />
[[Brian Cheung|home page]]<br />
<br />
'''Dylan Paiton''' <br /><br />
[http://vision.berkeley.edu/?p=3052 home page]<br />
<br />
[[File:yubei.png|150px|left]]<br />
<br style="clear:both;" /><br />
'''Yubei Chen''' <br /><br />
[[Yubei Chen|home page]]<br />
<br />
'''Guy Isely''' <br /><br />
[http://gisely.github.io/ home page]<br />
<br />
'''Mat Leonard''' <br /><br />
[[Mat Leonard|home page]]<br />
<br />
'''Jesse Livezey''' <br /><br />
[http://jesselivezey.com/ home page]<br />
<br />
'''Sarah Marzen''' <br /><br />
[[Sarah Marzen|home page]]<br />
<br />
[[File:Shariq222.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Shariq Mobin''' <br /><br />
[[Shariq Mobin|home page]]<br />
<br />
[[Image:mayur_mudigonda.jpg|alt=Mayur: the Magical Machine Learning Pony!|left]]<br />
<br style="clear:both;" /><br />
'''Mayur Mudigonda''' <br /><br />
[http://redwood.berkeley.edu/mayur home page]<br />
<br />
'''Chayut Thanapirom''' <br /><br />
[[Chayut Thanapirom|home page]]<br />
<br />
'''Joseph Thurakal''' <br /><br />
[[Joseph Thurakal|home page]]<br />
<br />
[[File:ogre.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Chris Warner''' <br /><br />
[https://redwood.berkeley.edu/wiki/User:Cwarner home page]<br />
<br />
[[File:eric.JPG|150px|left]]<br />
<br style="clear:both;" /><br />
'''Eric Weiss''' <br /><br />
[[Eric Weiss|home page]]<br />
<br />
== Alumni ==<br />
<br />
<br style="clear:both;" /><br />
'''Urs Koester''' <br /><br />
[[Urs | home page]]<br />
<br />
[[File:Gautam.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Gautam Agarwal''' <br /><br />
Postdoc at the Champalimaud Centre for the Unknown<br /><br />
[http://www.neuro.fchampalimaud.org/en/research/investigators/research-groups/group/Mainen/ home page]<br />
<br />
[[Image:ian.jpg|130px|left]]<br />
<br style="clear:both;" /><br />
'''Ian Stevenson''' <br /><br />
Currently at University of Connecticut<br /><br />
[http://homepages.uconn.edu/~ias13002/ home page]<br />
<br />
[[Image:paul_ivanov.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Paul Ivanov''' <br /><br />
[http://pisquared.org home page]<br />
<br />
'''Daniel Little''' <br /><br />
[[Daniel Little|home page]]<br />
<br />
'''Chris Rodgers''' <br /><br />
[http://chris-rodgers.com/ home page]<br />
<br />
'''Sharif Corinaldi''' <br /><br />
[[Sharif Corinaldi|home page]]<br />
<br />
'''Amir Khosrowshahi''' <br /><br />
[[Amir Khosrowshahi|home page]]<br />
<br />
'''Antony Lee''' <br /><br />
[[Antony Lee|home page]]<br />
<br />
'''Chetan Nandakumar''' <br /><br />
[[Chetan Nandakumar|home page]]<br />
<br />
'''Jascha Sohl-Dickstein''' <br /><br />
[[Jascha Sohl-Dickstein|home page]]<br />
<br />
[[Image:jiminy.gif|left]]<br />
<br style="clear:both;" /><br />
'''Jimmy Wang''' <br /><br />
[https://redwood.berkeley.edu/jwang/index.html home page]<br />
<br />
'''Joel Zylberberg''' <br /><br />
[[Joel Zylberberg|home page]]<br />
<br />
'''Jack Culpepper''' <br /><br />
[http://www.cs.berkeley.edu/~bjc/ home page]<br />
<br />
'''Badr Faisal Albanna''' <br /><br />
[[Badr Faisal Albanna|home page]]<br />
<br />
'''Peter Battaglino''' <br /><br />
[[Peter Battaglino|home page]]<br />
<br />
[[Image:nicole.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Nicole Carlson''' <br /><br />
[[Nicole Carlson|home page]]<br />
<br />
'''Alfonso Apicella''' <br /><br />
[[Alfonso Apicella|home page]]<br />
<br />
[[Image:charles.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Charles Cadieu''' <br /><br />
[http://charles.cadieu.us home page]<br />
<br />
[[Image:NicolFace.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Nicol Harper''' <br /><br />
[[Nicol Harper|home page]]<br />
<br />
'''Vivek Ayer''' <br /><br />
[[Vivek Ayer|home page]]<br />
<br />
[[Image:matthias.jpg|left]] <br />
<br style="clear:both;" /><br />
'''Matthias Bethge''', Postdoctoral Fellow, 2005 <br /><br />
Currently at the Max-Planck Institute, Tubingen <br /><br />
Winner of the Bernstein Independent Investigator Prize <br /><br />
[http://www.neuro.uni-bremen.de/~mbethge/ home page]<br />
<br />
'''Will Coulter''', MA Neuroscience (2009) <br /><br />
Currently at Siemens Healthcare Diagnostics R&D <br /><br />
[[Will Coulter|home page]] (outdated)<br />
<br />
'''Mohammad Dastjerdi''' <br /><br />
[[Mohammad Dastjerdi|home page]]<br />
<br />
[[Image:pierre.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Pierre Garrigues''' <br /><br />
[http://redwood.berkeley.edu/pierre home page]<br />
<br />
'''Joe Goldbeck''', MA Neuroscience (2012) <br /><br />
<br />
[[Image:thomas.jpg|left]] <br />
<br style="clear:both;" /><br />
'''Thomas Lauritzen''', Research Scientist, 2005-2010 <br /><br />
Currently at Second Sight Medical Products, Sylmar, CA <br /><br />
[[Thomas Lauritzen|home page]]<br />
<br />
[[Image:Gianluca.jpg|115px|left]]<br />
<br style="clear:both;" /><br />
'''Gianluca Monaci''' <br /><br />
Currently at Philips Research, Eindhoven <br /><br />
[[Gianluca Monaci|home page]]<br />
<br />
'''Martin Rehn''' <br /><br />
[[Martin Rehn|home page]]<br />
<br />
[[Image:Rozell.jpg]]<br />
<br style="clear:both;" /><br />
'''Chris Rozell''' <br /><br />
[http://www.ece.rice.edu/~crozell/ home page]<br />
<br />
[[Image:Ivana.jpg|left]] <br />
<br style="clear:both;" /><br />
'''Ivana Tosic''', Postdoctoral researcher, 2009-2011 <br /><br />
Currently with [http://rii.ricoh.com/ Ricoh Innovations Inc.], Menlo Park <br /><br />
[http://ivanatosic.net home page]</div>Giselyhttps://rctn.org/w/index.php?title=People_at_the_Redwood_Center&diff=8005People at the Redwood Center2014-12-08T08:47:47Z<p>Gisely: </p>
<hr />
<div>__NOEDITSECTION__<br />
== Faculty ==<br />
<br />
[[Image:deweese.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Michael R. DeWeese''', Assistant Professor, HWNI and Physics <br /><br />
[http://physics.berkeley.edu/people/faculty/michael-deweese Physics home page]<br />
<br />
<br />
[[Image:bruno.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Bruno Olshausen''', Director and Professor, HWNI and School of Optometry <br /><br />
[http://redwood.berkeley.edu/bruno home page]<br />
<br />
<br />
[[Image:fritz.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Fritz Sommer''', Acting Director and Associate Adjunct Professor, HWNI <br /><br />
[[Fritz Sommer|home page]]<br />
<br />
<br />
== Research Scientists ==<br />
<br />
[[Image:tony.jpg|170px|left]]<br />
<br style="clear:both;" /><br />
'''Tony Bell''' <br /><br />
[[Tony Bell|home page]]<br />
<br />
<br />
[[Image:tim.jpg]] <br />
<br style="clear:both;" /><br />
'''Tim Blanche''' <br /><br />
[[Tim Blanche|home page]]<br />
<br />
<br />
<br style="clear:both;" /><br />
'''Pentti Kanerva''' <br /><br />
[[Pentti Kanerva|home page]]<br />
<br />
<br />
[[Image:kilian.jpg|115px|left|link=Kilian Koepsell|Kilian Koepsell]]<br />
<br style="clear:both;" /><br />
'''Kilian Koepsell''' <br /><br />
[http://redwood.berkeley.edu/kilian/ home page] <br /><br />
[http://redwood.berkeley.edu/klab/ lab page]<br />
<br />
<br />
'''Vivienne L'Ecuyer Ming''', Visiting Scholar<br /><br />
[http://www.vivienneming.com home page]<br />
<br />
<br />
[[Image:bnfcjh.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Chris Hillar''' <br /><br />
[http://www.msri.org/people/members/chillar/ home page]<br />
<br />
<br />
[[Image:Allie_photo.jpeg]]<br />
<br style="clear:both;" /><br />
'''Alyson "Allie" Fletcher''' <br/><br />
Affiliated Researcher & UCSC Professor <br/><br />
[http://users.soe.ucsc.edu/~afletcher/ home page]<br />
<br />
<br style="clear:both;" /><br />
'''Karl Zipser''' <br/><br />
Assistant Researcher<br/><br />
[http://karlzipser.com home page]<br />
== Postdocs ==<br />
<br />
<br />
<br style="clear:both;" /><br />
'''Urs Koester''' <br /><br />
[[Urs | home page]]<br />
<br />
<br />
== Staff ==<br />
[[Image:JeffTeeters.jpg|115px|left]]<br />
<br style="clear:both;" /><br />
'''Jeff Teeters''' <br /><br />
[[Jeff Teeters|home page]]<br />
<br />
<br />
== Students ==<br />
<br />
[[File:JamesA.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''James Arnemann''' <br /><br />
[[James Arnemann|home page]]<br />
<br />
'''Brian Cheung''' <br /><br />
[[Brian Cheung|home page]]<br />
<br />
[[File:yubei.png|150px|left]]<br />
<br style="clear:both;" /><br />
'''Yubei Chen''' <br /><br />
[[Yubei Chen|home page]]<br />
<br />
'''Guy Isely''' <br /><br />
[http://gisely.github.io/ home page]<br />
<br />
'''Mat Leonard''' <br /><br />
[[Mat Leonard|home page]]<br />
<br />
'''Jesse Livezey''' <br /><br />
[http://jesselivezey.com/ home page]<br />
<br />
'''Sarah Marzen''' <br /><br />
[[Sarah Marzen|home page]]<br />
<br />
[[Image:mayur_mudigonda.jpg|alt=Mayur: the Magical Machine Learning Pony!|left]]<br />
<br style="clear:both;" /><br />
'''Mayur Mudigonda''' <br /><br />
[http://redwood.berkeley.edu/mayur home page]<br />
<br />
'''Chayut Thanapirom''' <br /><br />
[[Chayut Thanapirom|home page]]<br />
<br />
[[File:ogre.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Chris Warner''' <br /><br />
[https://redwood.berkeley.edu/wiki/User:Cwarner home page]<br />
<br />
[[File:eric.JPG|150px|left]]<br />
<br style="clear:both;" /><br />
'''Eric Weiss''' <br /><br />
[[Eric Weiss|home page]]<br />
<br />
[[File:Shariq222.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Shariq Mobin''' <br /><br />
[[Shariq Mobin|home page]]<br />
<br />
[[File:Alex.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Alex Anderson''' <br /><br />
[http://www.ocf.berkeley.edu/~aga/ Website]<br />
<br />
== Alumni ==<br />
<br />
[[File:Gautam.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Gautam Agarwal''' <br /><br />
Postdoc at the Champalimaud Centre for the Unknown<br /><br />
[http://www.neuro.fchampalimaud.org/en/research/investigators/research-groups/group/Mainen/ home page]<br />
<br />
[[Image:ian.jpg|130px|left]]<br />
<br style="clear:both;" /><br />
'''Ian Stevenson''' <br /><br />
Currently at University of Connecticut<br /><br />
[http://homepages.uconn.edu/~ias13002/ home page]<br />
<br />
[[Image:paul_ivanov.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Paul Ivanov''' <br /><br />
<br />
'''Daniel Little''' <br /><br />
[[Daniel Little|home page]]<br />
<br />
'''Chris Rodgers''' <br /><br />
[[Chris Rodgers|home page]]<br />
<br />
'''Sharif Corinaldi''' <br /><br />
[[Sharif Corinaldi|home page]]<br />
<br />
'''Amir Khosrowshahi''' <br /><br />
[[Amir Khosrowshahi|home page]]<br />
<br />
'''Antony Lee''' <br /><br />
[[Antony Lee|home page]]<br />
<br />
'''Chetan Nandakumar''' <br /><br />
[[Chetan Nandakumar|home page]]<br />
<br />
'''Jascha Sohl-Dickstein''' <br /><br />
[[Jascha Sohl-Dickstein|home page]]<br />
<br />
[[Image:jiminy.gif|left]]<br />
<br style="clear:both;" /><br />
'''Jimmy Wang''' <br /><br />
[https://redwood.berkeley.edu/jwang/index.html home page]<br />
<br />
'''Joel Zylberberg''' <br /><br />
[[Joel Zylberberg|home page]]<br />
<br />
'''Jack Culpepper''' <br /><br />
[http://www.cs.berkeley.edu/~bjc/ home page]<br />
<br />
'''Badr Faisal Albanna''' <br /><br />
[[Badr Faisal Albanna|home page]]<br />
<br />
'''Peter Battaglino''' <br /><br />
[[Peter Battaglino|home page]]<br />
<br />
[[Image:nicole.jpg|left]]<br />
<br style="clear:both;" /><br />
'''Nicole Carlson''' <br /><br />
[[Nicole Carlson|home page]]<br />
<br />
'''Alfonso Apicella''' <br /><br />
[[Alfonso Apicella|home page]]<br />
<br />
[[Image:charles.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Charles Cadieu''' <br /><br />
[http://charles.cadieu.us home page]<br />
<br />
[[Image:NicolFace.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Nicol Harper''' <br /><br />
[[Nicol Harper|home page]]<br />
<br />
'''Vivek Ayer''' <br /><br />
[[Vivek Ayer|home page]]<br />
<br />
[[Image:matthias.jpg|left]] <br />
<br style="clear:both;" /><br />
'''Matthias Bethge''', Postdoctoral Fellow, 2005 <br /><br />
Currently at the Max-Planck Institute, Tubingen <br /><br />
Winner of the Bernstein Independent Investigator Prize <br /><br />
[http://www.neuro.uni-bremen.de/~mbethge/ home page]<br />
<br />
'''Will Coulter''', MA Neuroscience (2009) <br /><br />
Currently at Siemens Healthcare Diagnostics R&D <br /><br />
[[Will Coulter|home page]] (outdated)<br />
<br />
'''Mohammad Dastjerdi''' <br /><br />
[[Mohammad Dastjerdi|home page]]<br />
<br />
[[Image:pierre.jpg|150px|left]]<br />
<br style="clear:both;" /><br />
'''Pierre Garrigues''' <br /><br />
[http://redwood.berkeley.edu/pierre home page]<br />
<br />
'''Joe Goldbeck''', MA Neuroscience (2012) <br /><br />
<br />
[[Image:thomas.jpg|left]] <br />
<br style="clear:both;" /><br />
'''Thomas Lauritzen''', Research Scientist, 2005-2010 <br /><br />
Currently at Second Sight Medical Products, Sylmar, CA <br /><br />
[[Thomas Lauritzen|home page]]<br />
<br />
[[Image:Gianluca.jpg|115px|left]]<br />
<br style="clear:both;" /><br />
'''Gianluca Monaci''' <br /><br />
Currently at Philips Research, Eindhoven <br /><br />
[[Gianluca Monaci|home page]]<br />
<br />
'''Martin Rehn''' <br /><br />
[[Martin Rehn|home page]]<br />
<br />
[[Image:Rozell.jpg]]<br />
<br style="clear:both;" /><br />
'''Chris Rozell''' <br /><br />
[http://www.ece.rice.edu/~crozell/ home page]<br />
<br />
[[Image:Ivana.jpg|left]] <br />
<br style="clear:both;" /><br />
'''Ivana Tosic''', Postdoctoral researcher, 2009-2011 <br /><br />
Currently with [http://rii.ricoh.com/ Ricoh Innovations Inc.], Menlo Park <br /><br />
[http://ivanatosic.net home page]</div>Giselyhttps://rctn.org/w/index.php?title=Main_Page&diff=7819Main Page2014-10-02T21:05:13Z<p>Gisely: /* Public Website Pages */</p>
<hr />
<div>The Redwood Center '''homepage''' is [http://redwood.berkeley.edu/ here].<br />
<br />
== News ==<br />
<br />
The [[Redwood Center Inaugural Symposium DVD]] is online!<br />
<br />
== Remarks for new users ==<br />
<br />
# You can use the 'preferences' link at the top of the page to configure this wiki. In particular, you might want to change the 'skin' to ''MonoBookRedwood'' in order to take full advantage of the wiki functionality<br />
# Please see the [http://meta.wikipedia.org/wiki/MediaWiki_User%27s_Guide User's Guide] for usage and configuration help.<br />
# The [http://meta.wikimedia.org/wiki/MediaWiki_FAQ FAQ] is also useful.<br />
# You might also want to sign up on our [[mailinglist]]<br />
<br />
Have fun!<br />
--[[User:Kilian|Kilian]] 16:45, 8 November 2005 (PST)<br />
<br />
== Public Website Pages ==<br />
<br />
# Here is the page for the '''[[Mission and Research]]''' of the RedwoodCenter!<br />
# Here is the wiki version of the [[People at the Redwood Center]] page.<br />
# Here is the page for editing [[Publications]]<br />
<br />
== Internal Organization ==<br />
<br />
# [[Lab meeting schedule]] for our Friday lab meetings.<br />
# Here is the [[Seminars]] page for the organization of our weekly seminar.<br />
# Here is the wiki page for [[TCN|Topics in Computational Neuroscience (TCN)]] journal club.<br />
# Here is the wiki page for [[OMJC|Topics in Ocular Motion and Stable Visual Perception (OM)]] journal club.<br />
# Internal [http://www.google.com/calendar/embed?src=5fvgm6d1nfflb7hcvcc5acbt7s%40group.calendar.google.com&ctz=America/Los_Angeles Calendar] used for Conference Room reservation. See [[Calendars]] for information on how to login to add another reservation or event.<br />
# There are also calendars for the overflow rooms Evans [http://goo.gl/G4mO5N 550] and [http://goo.gl/suIbHN 552] if you'd like to reserve them.<br />
# We have tea time with cookies every day (except Friday) at 3:30. Here are the [[Tea Time Notes]].<br />
# Here are some [[Mentorship Resources|resources on mentorship]]. Consider serving as a mentor - it's an excellent way to give back.<br />
<br />
== Redwood Resources ==<br />
<br />
# [[Cluster]] - information on using the cortex cluster<br />
# [[Mailinglist]] - has all of the redwood email lists with descriptions<br />
# Instructions for using the RES [[Purchasing System]].<br />
# Here is a resource for presenting our Redwood identity [[Logos]]<br />
# Here is a page with information how to get an [[RSS feed]] for our web pages ([[Image:RSS.gif|RSS]]).<br />
# [[VS298: Neural Computation]] course, Fall 08.<br />
# Who is in charge of X? [[Responsibilities]]<br />
# [[NON-EQUILIBRIUM CLUB]] [[Essential Reading]]<br />
# [[Brain_Network_Dynamics_2007]]<br />
<br />
== Older pages (possibly in need of update) ==<br />
<br />
# old course page [[VS298 (Fall 06): Neural Computation]].<br />
# Here is the table about [[Graphical Models]] we discussed in the journal club.<br />
# Here is a page to suggest [[Purchase Items]] (i.e. books).<br />
# [[Summer Courses]] related to computational/theoretical neuroscience.<br />
# [[Cosyne 2007 Workshops]]<br />
# Photos from the [[Pt Reyes hike]] (July 29, 2006), and the [[Mt Diablo hike]] (August 25, 2006).<br />
# [[Redwood Center Meeting 2008]]</div>Giselyhttps://rctn.org/w/index.php?title=Seminars&diff=7600Seminars2014-08-27T18:48:01Z<p>Gisely: /* Tentative / Confirmed Speakers */</p>
<hr />
<div>== Instructions ==<br />
<br />
# Check the internal calendar (here) for a free seminar slot. Seminars are usually Wednesdays at noon, but this is flexible if another day works better for the speaker. However, it is usually best to avoid booking multiple speakers in the same week - it leads to "seminar burnout" and reduced attendance. Use your own judgement here - if it's a good opportunity and that's the only time that works, then go ahead with it.<br />
# Once you have proposed a date to a speaker, fill in the speaker information under the appropriate date (or change if necessary). Use the status field to indicate whether the date is tentative or confirmed. Please also include your name as ''host'' in case somebody wants to contact you.<br />
# Once the invitation is confirmed with the speaker, change the status field to 'confirmed'. Also notify the webmaster (Bruno) [mailto:baolshausen@berkeley.edu] that we have a confirmed speaker so that he/she can update the public web page. Please include a title and abstract.<br />
# Natalie (HWNI) checks our web page regularly and will send out an announcement a week before and also include with the weekly neuro announcements, but if you don't get it confirmed until the last minute then make sure to email Natalie [mailto:nrterranova@berkeley.edu] as well to give her a heads up so she knows to send out an announcement in time.<br />
# If the speaker needs accommodations, contact Natalie [mailto:nrterranova@berkeley.edu] to reserve a room at the faculty club. Tell her it's for a Redwood speaker so she knows how to bill it.<br />
# During the visit you will need to look after the visitor, schedule visits with other labs, make plans for lunch, dinner, etc., and introduce the speaker at the seminar (don't ask Bruno to do this at the last moment). Save receipts for any meals you paid for.<br />
# After the seminar and before the speaker leaves, make sure to give them Natalie's contact info and have them email her their receipts, explaining it's for reimbursement for a Redwood seminar. Natalie will then process the reimbursement. She can also help you with getting reimbursed for any expenses you incurred for meals and entertainment.<br />
<br />
== Tentative / Confirmed Speakers ==<br />
<br />
'''2 Sept 2014'''<br />
* Speaker: Oriol Vinyals <br />
* Affiliation: Google<br />
* Host: Guy<br />
* Status: confirmed<br />
* Title: Machine Translation with Long Short-Term Memory Models<br />
* Abstract: Large supervised deep neural networks have achieved good results in speech recognition and computer vision. Although very successful, deep neural networks can only be applied to problems whose inputs and outputs can be conveniently encoded as vectors of fixed dimensionality, and cannot easily be applied to problems whose inputs and outputs are sequences. In this work, we show how to use a large deep Long Short-Term Memory (LSTM) model to solve domain-agnostic supervised sequence to sequence problems with minimal manual engineering. Our model uses one LSTM to map the input sequence to a vector of a fixed dimensionality and another LSTM to map the vector to the output sequence. We applied our model to a machine translation task and achieved encouraging results. On the WMT'14 translation task from English to French, a model combination of 6 large LSTMs achieves a BLEU score of 32.3 (where a larger score is better). For comparison, a strong standard statistical MT baseline achieves a BLEU score of 33.3. When we use our LSTM to rescore the n-best lists produced by the SMT baseline, we achieve a BLEU score of 36.3, which is a new state of the art. This is joint work with Ilya Sutskever and Quoc Le.<br />
<br />
'''19 Sept 2014'''<br />
* Speaker: Gary Marcus<br />
* Affiliation: NYU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''24 Sept 2014'''<br />
* Speaker: Alyosha Efros<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''10 Nov 2014'''<br />
* Speaker: Steve Chase<br />
* Affiliation: CMU<br />
* Host: Bruno/Jose - Redwood/CNEP<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''January 21, 2015'''<br />
* Speaker: Adrienne Fairhall<br />
* Affiliation: University of Washington<br />
* Host: Mike Schachter<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''TBD'''<br />
* Speaker: Allie Fletcher<br />
* Affiliation: UC Santa Cruz<br />
* Host: Mike S<br />
* Status: confirmed<br />
* Title: Scalable Identification for Structured Nonlinear Neural Systems<br />
* Abstract: TBA<br />
<br />
'''TBD'''<br />
* Speaker: Kanaka Rajan<br />
* Affiliation: Princeton University<br />
* Host: Jeff Teeters/Sarah Marzen<br />
* Status: was confirmed, needs to reschedule<br />
* Title: Generation of sequences through reconfiguration of ongoing activity in neural networks: A model of choice-specific cortical dynamics in virtual navigation<br />
* Abstract: TBA<br />
<br />
== Previous Seminars ==<br />
<br />
=== 2014/15 academic year ===<br />
<br />
'''2 July 2014'''<br />
* Speaker: Kelly Clancy<br />
* Affiliation: Feldman lab<br />
* Host: Guy<br />
* Status: confirmed<br />
* Title: Volitional control of neural assemblies in L2/3 of motor and somatosensory cortices<br />
* Abstract: I'll be talking about a joint effort between the Feldman, Carmena and Costa labs to study abstract task learning by small neuronal assemblies in intact networks. Brain-machine interfaces are a unique tool for studying learning, thanks to the direct mapping between neural activity and reward. We trained mice to operantly control an auditory cursor using spike-related calcium signals recorded with two-photon imaging in motor and somatosensory cortex, allowing us to assess the effects of learning with great spatial detail. Mice rapidly learned to modulate activity in layer 2/3 neurons, evident both across and within sessions. Interestingly, even neurons that exhibited very low or no spontaneous spiking--so-called 'silent' cells that are invisible to electrode-based techniques--could be behaviorally up-modulated for task performance. Learning was accompanied by modifications of firing correlations in spatially localized networks at fine scales.<br />
<br />
'''23 July 2014'''<br />
* Speaker: Gautam Agarwal<br />
* Affiliation: UC Berkeley/Champalimaud<br />
* Host: Friedrich Sommer<br />
* Status: confirmed<br />
* Title: Unsolved Mysteries of Hippocampal Dynamics<br />
* Abstract: Two radically different forms of electrical activity can be observed in the rat hippocampus: spikes and local field potentials (LFPs). Hippocampal pyramidal neurons are mostly silent, yet spike vigorously as the subject encounters particular locations in its environment. In contrast, LFPs appear to lack place-selectivity, persisting regardless of the rat's location. Recently, we found that in fact one can recover from LFPs the spatial information present in the underlying neuronal population, showing how these two signals are two sides of the same coin. Nonetheless, there are many aspects of the LFP that remain mysterious. I will review several observations and explanatory gaps which await further study. These include: the relationship of LFP patterns to anatomy; the elusive structure of gamma waves; complex forms of cross-frequency coupling; variations in LFP patterns seen when the rat explores its world more freely; reconciling the memory and navigation roles of the hippocampus.<br />
<br />
'''6 Aug 2014'''<br />
* Speaker: Georg Martius<br />
* Affiliation: Max Planck Institute, Leipzig<br />
* Host: Fritz Sommer<br />
* Status: confirmed<br />
* Title: Information driven self-organization of robotic behavior<br />
* Abstract: Autonomy is a puzzling phenomenon in nature and a major challenge in the world of artifacts. A key feature of autonomy in both natural and artificial systems is the ability for independent exploration. In animals and humans, the ability to modify one's own pattern of activity is not only an indispensable trait for adaptation and survival in new situations, it also provides a learning system with novel information for improving its cognitive capabilities, and it is essential for development. Efficient exploration in high-dimensional spaces is a major challenge in building learning systems. We propose to implement exploration as a deterministic law derived from maximizing an information quantity. More specifically, we use the predictive information of the sensor process (of a robot) to obtain an update rule (exploration dynamics) for the controller parameters. To be adequate in robotics applications, the non-stationary nature of the underlying time series has to be taken into account, which we do by proposing the time-local predictive information (TiPI). Importantly, the exploration dynamics is derived analytically, and by this we link information theory and dynamical systems. Without a random component, the change in the parameters is deterministically given as a function of the states in a certain time window. For an embodied system this means in particular that constraints, responses and current knowledge of the dynamical interaction with the environment can directly be used to advance further exploration. Randomness is replaced with spontaneity, which we demonstrate to restrict the search space automatically to the physically relevant dimensions. Its effectiveness will be presented with various experiments on high-dimensional robotic systems, and we argue that this is a promising way to avoid the curse of dimensionality. This talk describes joint work with Ralf Der and Nihat Ay.<br />
<br />
'''15 Aug 2014'''<br />
* Speaker: Juergen Schmidhuber<br />
* Affiliation: IDSIA, Switzerland<br />
* Host: James/Shariq<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
=== 2013/14 academic year ===<br />
<br />
'''9 Oct 2013'''<br />
* Speaker: Ekaterina Brocke<br />
* Affiliation: KTH University, Stockholm, Sweden<br />
* Host: Tony<br />
* Status: confirmed<br />
* Title: Multiscale modeling in Neuroscience: first steps towards multiscale co-simulation tool development.<br />
* Abstract: Multiscale modeling and simulation attract an increasing number of neuroscientists studying how different levels of organization (networks of neurons, cellular/subcellular levels) interact with each other across multiple scales of space and time to mediate different brain functions. Different scales are usually described by different physical and mathematical formalisms, making the integration non-trivial. In this talk, I will discuss key phenomena in neuroscience that can be addressed using subcellular/cellular models, and possible approaches to performing multiscale simulations, in particular a co-simulation method. I will also introduce several multiscale "toy" models of cellular/subcellular levels that were developed with the aim of understanding the numerical and technical problems that might appear during co-simulation. Finally, I will present the first steps made towards developing a multiscale co-simulation tool.<br />
<br />
'''29 Oct 2013 - note: 4:00'''<br />
* Speaker: Mitya Chklovskii<br />
* Affiliation: HHMI/Janelia Farm<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''30 Oct 2013'''<br />
* Speaker: Ilya Nemenman<br />
* Affiliation: Emory University, Departments of Physics and Biology<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Large N in neural data -- expecting the unexpected.<br />
* Abstract: Recently it has become possible to directly measure simultaneous collective states of many biological components, such as neural activities, genetic sequences, or gene expression profiles. These data are revealing striking results, suggesting, for example, that biological systems are tuned to criticality, and that effective models of these systems based on only pairwise interactions among constitutive components provide surprisingly good fits to the data. We will explore a handful of simplified theoretical models, largely focusing on statistical mechanics of Ising spins, that suggest plausible explanations for these observations. Specifically, I will argue that, at least in certain contexts, these intriguing observations should be expected in multivariate interacting data in the thermodynamic limit of many interacting components.<br />
<br />
'''31 Oct 2013'''<br />
* Speaker: Oriol Vinyals<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Brian<br />
* Status: confirmed<br />
* Title: Beyond Deep Learning: Scalable Methods and Models for Learning<br />
* Abstract: In this talk I will briefly describe several techniques I explored in my thesis that improve how we efficiently model signal representations and learn useful information from them. The building block of my dissertation is machine learning approaches to classification, where a (typically non-linear) function is learned from labeled examples to map from signals to some useful information (e.g. an object class present in an image, or a word present in an acoustic signal). One of the motivating factors of my work has been advances in neural networks with deep architectures (which have led to the terminology "deep learning") that have shown state-of-the-art performance in acoustic modeling and object recognition -- the main focus of this thesis. In my work, I have contributed both to the learning (or training) of such architectures through faster and more robust optimization techniques, and to the simplification of the deep architecture model to an approach that is simple to optimize. Furthermore, I derived a theoretical bound showing a fundamental limitation of shallow architectures based on sparse coding (which can be seen as a one-hidden-layer neural network), thus justifying the need for deeper architectures, while also empirically verifying these architectural choices on speech recognition. Many of my contributions have been used in a wide variety of applications, products and datasets as a result of many collaborations within ICSI and Berkeley, but also at Microsoft Research and Google Research.<br />
<br />
'''6 Nov 2013'''<br />
* Speaker: Garrett T. Kenyon<br />
* Affiliation: Los Alamos National Laboratory, The New Mexico Consortium<br />
* Host: Dylan Paiton<br />
* Status: Confirmed<br />
* Title: Using Locally Competitive Algorithms to Model Top-Down and Lateral Interactions<br />
* Abstract: Cortical connections consist of feedforward, feedback and lateral pathways. Infragranular layers project down the cortical hierarchy to both supra- and infragranular layers at the previous processing level, while the neurons in supragranular layers are linked by extensive long-range lateral projections that cross multiple cortical columns. However, most functional models of visual cortex only account for feedforward connections. Additionally, most models of visual cortex fail to account both for the thalamic projections to non-striate areas and the reciprocal connections from extrastriate areas back to the thalamus. In this talk, I will describe how a modified Locally Competitive Algorithm (LCA; Rozell et al, Neural Comp, 2008) can be used as a unifying framework for exploring the role of top-down and lateral cortical pathways within the context of deep, sparse, generative models. I will also describe an open source software tool called PetaVision that can be used to implement and execute hierarchical LCA-based models on multi-core, multi-node computer platforms without requiring specific knowledge of parallel-programming constructs.<br />
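As background for readers unfamiliar with the LCA referenced above, the standard feedforward dynamics of Rozell et al. (2008) can be sketched as follows. This is a generic illustration with a made-up random dictionary and arbitrary parameter choices, not the modified hierarchical model described in the talk:

```python
import numpy as np

def lca(x, Phi, lam=0.1, tau=10.0, n_steps=200):
    """Minimal Locally Competitive Algorithm (Rozell et al., 2008).

    Solves min_a 0.5*||x - Phi a||^2 + lam*||a||_1 via neural dynamics:
    membrane potentials u integrate feedforward drive while active units
    laterally inhibit one another in proportion to dictionary overlap.
    """
    K = Phi.shape[1]
    u = np.zeros(K)                        # membrane potentials
    G = Phi.T @ Phi - np.eye(K)            # lateral inhibition weights
    b = Phi.T @ x                          # feedforward drive
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    for _ in range(n_steps):
        a = soft(u)                        # thresholded activations
        u += (b - u - G @ a) / tau         # leaky integration + competition
    return soft(u)

# Toy example: recover a sparse code from a random unit-norm dictionary.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(20, 50))
Phi /= np.linalg.norm(Phi, axis=0)
a_true = np.zeros(50)
a_true[[3, 17]] = [1.0, -0.5]
x = Phi @ a_true
a_hat = lca(x, Phi)                        # sparse, approximately reconstructs x
```

The lateral term `G @ a` is what distinguishes LCA from pure feedforward thresholding: active units suppress overlapping competitors, which is the hook the talk's top-down and lateral extensions build on.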
<br />
'''14 Nov 2013 (note: Thursday), ***12:30pm*** '''<br />
* Speaker: Geoffrey J Goodhill<br />
* Affiliation: Queensland Brain Institute and School of Mathematics and Physics, The University of Queensland, Australia<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Computational principles of neural wiring development<br />
* Abstract: Brain function depends on precise patterns of neural wiring. An axon navigating to its target must make guidance decisions based on noisy information from molecular cues in its environment. I will describe a combination of experimental and computational work showing that (1) axons may act as ideal observers when sensing chemotactic gradients, (2) the complex influence of calcium and cAMP levels on guidance decisions can be predicted mathematically, (3) the morphology of growth cones at the axonal tip can be understood in terms of just a few eigenshapes, and remarkably these shapes oscillate in time with periods ranging from minutes to hours. Together this work may shed light on how neural wiring goes wrong in some developmental brain disorders, and how best to promote appropriate regrowth of axons after injury.<br />
<br />
'''4 Dec 2013'''<br />
* Speaker: Zhenwen Dai<br />
* Affiliation: FIAS, Goethe University Frankfurt, Germany.<br />
* Host: Georgios Exarchakis<br />
* Status: Confirmed<br />
* Title: What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach <br />
* Abstract: We study optimal image encoding based on a generative approach with non-linear feature combinations and explicit position encoding. By far most approaches to unsupervised learning of visual features, such as sparse coding or ICA, account for translations by representing the same features at different positions. Some earlier models used a separate encoding of features and their positions to facilitate invariant data encoding and recognition. All probabilistic generative models with explicit position encoding have so far assumed a linear superposition of components to encode image patches. Here we apply, for the first time, a model with non-linear feature superposition and explicit position encoding to image patches. By avoiding linear superpositions, the studied model represents a closer match to component occlusions, which are ubiquitous in natural images. In order to account for occlusions, the non-linear model encodes patches qualitatively very differently from linear models, using component representations separated into mask and feature parameters. We first investigated encodings learned by the model using artificial data with mutually occluding components. We find that the model extracts the components, and that it can correctly identify the occlusive components with the hidden variables of the model. On natural image patches, the model learns component masks and features for typical image components. By using reverse correlation, we estimate the receptive fields associated with the model’s hidden units. We find many Gabor-like or globular receptive fields, as well as fields sensitive to more complex structures. Our results show that probabilistic models that capture occlusions and invariances can be trained efficiently on image patches, and that the resulting encoding represents an alternative model for the neural encoding of images in the primary visual cortex.<br />
<br />
'''11 Dec 2013'''<br />
* Speaker: Kai Siedenburg<br />
* Affiliation: UC Davis, Petr Janata's Lab.<br />
* Host: Jesse Engel<br />
* Status: Confirmed<br />
* Title: Characterizing Short-Term Memory for Musical Timbre<br />
* Abstract: Short-term memory is a cognitive faculty central to the apprehension of music and speech. Little is known, however, about memory for musical timbre, despite its "sisterhood" with speech; after all, speech can be regarded as a sequencing of vocal timbre. Past research has isolated many characteristic effects of verbal memory. Are these also in play for non-vocal timbre sequences? We studied this question by considering short-term memory for serial order. Using timbres and dissimilarity data from McAdams et al. (Psych. Research, 1995), we employed a same/different discrimination paradigm. Experiment 1 (N = 30 MU + 30 nonMU) revealed effects of sequence length and timbral dissimilarity of items, as well as an interaction of musical training and pitch variability: in contrast to musicians, non-musicians' performance was impaired by simultaneous changes in pitch, compared to a constant pitch baseline. Experiment 2 (N = 22) studied whether musicians' memory for timbre sequences was independent of pitch irrespective of the degree of complexity of pitch progressions. Comparing sequences with pitch changing within and across standard and comparison to a constant pitch baseline, performance was now clearly impaired for the variable pitch condition. Experiment 3 (N = 22) showed primacy and recency effects for musicians, and reproduced a positive effect of timbral heterogeneity of sequences. Our findings demonstrate the presence of hallmark effects of verbal memory, such as similarity, word length, and primacy/recency, in the domain of non-vocal timbre, and suggest that memory for speech and non-vocal timbre sequences might to a large extent share underlying mechanisms.<br />
<br />
'''12 Dec 2013'''<br />
* Speaker: Matthias Bethge<br />
* Affiliation: University of Tubingen<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''22 Jan 2014'''<br />
* Speaker: Thomas Martinetz<br />
* Affiliation: Univ Luebeck<br />
* Host: Bruno/Fritz<br />
* Status: confirmed<br />
* Title: Orthogonal Sparse Coding and Sensing<br />
* Abstract: Sparse coding has been a very successful concept, since many natural signals have the property of being sparse in some dictionary (basis). Some natural signals are even sparse in an orthogonal basis, most prominently natural images, which are sparse in a respective wavelet transform. An encoding in an orthogonal basis has a number of advantages, e.g., finding the optimal coding coefficients is simply a projection instead of being NP-hard. Given some data, we want to find the orthogonal basis which provides the sparsest code. This problem can be seen as a generalization of Principal Component Analysis. We present an algorithm, Orthogonal Sparse Coding (OSC), which is able to find this basis very robustly. On natural images, it compresses on the level of JPEG, but can adapt to arbitrary and special data sets and achieve significant improvements. With the property of being sparse in some orthogonal basis, we show how signals can be sensed very efficiently in a hierarchical manner with at most k log D sensing actions. This hierarchical sensing might relate to the way we sense the world, with interesting applications in active vision.<br />
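To illustrate the advantage the abstract notes: for an orthonormal basis, the optimal k-sparse code is obtained exactly by a projection followed by keeping the k largest-magnitude coefficients. This generic sketch uses a made-up random orthonormal basis, not the learned OSC basis:

```python
import numpy as np

def sparse_code_orthonormal(x, U, k):
    """Optimal k-sparse code of x in an orthonormal basis U (columns).

    For orthonormal U, minimizing ||x - U a||^2 over k-sparse a is solved
    exactly by projecting (a = U^T x) and keeping the k largest-magnitude
    coefficients -- no NP-hard combinatorial search needed.
    """
    a = U.T @ x                        # projection gives exact coefficients
    idx = np.argsort(np.abs(a))[:-k]   # indices of the D-k smallest entries
    a[idx] = 0.0                       # hard-threshold everything else
    return a

# Toy example: a random orthonormal basis via QR decomposition.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(8, 8)))
x = U @ np.array([3.0, 0, 0, -2.0, 0, 0, 0, 0.1])
a = sparse_code_orthonormal(x, U, k=2)
# a keeps only the two largest coefficients (3.0 and -2.0)
```

For a general overcomplete dictionary the same subset-selection problem is NP-hard, which is exactly the contrast the abstract draws.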
<br />
'''29 Jan 2014'''<br />
* Speaker: David Klein<br />
* Affiliation: Audience<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''5 Feb 2014''' (leave open for Barth/Martinetz seminar)<br />
<br />
'''12 Feb 2014'''<br />
* Speaker: Ilya Sutskever <br />
* Affiliation: Google<br />
* Host: Zayd<br />
* Status: confirmed<br />
* Title: Continuous vector representations for machine translation<br />
* Abstract: Dictionaries and phrase tables are the basis of modern statistical machine translation systems. I will present a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures from large monolingual data, and by mapping between the languages using a small bilingual dataset. It uses distributed representations of words and learns a linear mapping between the vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish. This method makes few assumptions about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pair. Joint work with Tomas Mikolov and Quoc Le.<br />
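The linear-mapping idea described above can be sketched with ordinary least squares. The toy "embeddings" below are random vectors invented for illustration; the actual method learns the map between independently trained monolingual word2vec spaces:

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up monolingual embeddings: rows are word vectors.
X_en = rng.normal(size=(100, 10))     # "English" vectors
W_true = rng.normal(size=(10, 10))    # unknown true mapping (toy setup only)
X_es = X_en @ W_true.T                # "Spanish" vectors, exact linear image

# Learn the mapping from a small bilingual seed dictionary (30 word pairs)
# by minimizing sum_i ||x_i W - z_i||^2 -- an ordinary least-squares fit.
seed = slice(0, 30)
W, *_ = np.linalg.lstsq(X_en[seed], X_es[seed], rcond=None)

def translate(x, target_vectors):
    """Map a source vector through W and return its nearest target index."""
    z = x @ W
    return int(np.argmin(np.linalg.norm(target_vectors - z, axis=1)))

# Held-out words (not in the seed dictionary) map to the right translations.
hits = sum(translate(X_en[i], X_es) == i for i in range(30, 100))
```

In this idealized setup the relation is exactly linear, so all held-out words translate correctly; with real embeddings the map is only approximate, which is why the abstract reports precision@5 rather than exact matches.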
<br />
'''25 Feb 2014'''<br />
* Speaker: Alexander Terekhov <br />
* Affiliation: CNRS - Université Paris Descartes<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Constructing space: how a naive agent can learn spatial relationships by observing sensorimotor contingencies<br />
* Abstract:<br />
<br />
'''12 March 2014'''<br />
* Speaker: Carlos Portera-Cailliau<br />
* Affiliation: UCLA<br />
* Host: Mike<br />
* Status: confirmed<br />
* Title: Circuit defects in the neocortex of Fmr1 knockout mice<br />
* Abstract: TBA<br />
<br />
'''19 March 2014'''<br />
* Speaker: Dean Buonomano<br />
* Affiliation: UCLA<br />
* Host: Mike<br />
* Status: confirmed<br />
* Title: State-dependent Networks: Timing and Computations Based on Neural Dynamics and Short-term Plasticity<br />
* Abstract: The brain’s ability to seamlessly assimilate and process spatial and temporal information is critical to most behaviors, from understanding speech to playing the piano. Indeed, because the brain evolved to navigate a dynamic world, timing and temporal processing represent a fundamental computation. We have proposed that timing and the processing of temporal information emerge from the interaction between incoming stimuli and the internal state of neural networks. The internal state is defined not only by ongoing activity (the active state) but also by time-varying synaptic properties, such as short-term synaptic plasticity (the hidden state). One prediction of this hypothesis is that timing is a general property of cortical circuits. We provide evidence in this direction by demonstrating that in vitro cortical networks can “learn” simple temporal patterns. Finally, previous theoretical studies have suggested that recurrent networks capable of self-perpetuating activity hold significant computational potential. However, harnessing the computational potential of these networks has been hampered by the fact that such networks are chaotic. We show that it is possible to “tame” chaos through recurrent plasticity, creating a novel and powerful general framework for how cortical circuits compute.<br />
<br />
'''26 March 2014'''<br />
* Speaker: Robert G. Smith<br />
* Affiliation: University of Pennsylvania<br />
* Host: Mike S<br />
* Status: confirmed<br />
* Title: Role of Dendritic Computation in the Direction-Selective Circuit of Retina<br />
* Abstract: The retina utilizes a variety of signal processing mechanisms to compute direction from image motion. The computation is accomplished by a circuit that includes starburst amacrine cells (SBACs), which are GABAergic neurons presynaptic to direction-selective ganglion cells (DSGCs). SBACs are symmetric neurons with several branched dendrites radiating out from the soma. When a stimulus moving back and forth along a SBAC dendrite sequentially activates synaptic inputs, larger post-synaptic potentials (PSPs) are produced in the dendritic tips when the stimulus moves outwards from the soma. The directional difference in EPSP amplitude is further amplified near the dendritic tips by voltage-gated channels to produce directional release of GABA. Reciprocal inhibition between adjacent SBACs may also amplify directional release. Directional signals in the independent SBAC branches are preserved because each dendrite makes selective contacts only with DSGCs of the appropriate preferred direction. Directional signals are further enhanced within the dendritic arbor of the DSGC, which essentially comprises an array of distinct dendritic compartments. Each of these dendritic compartments locally sums excitatory and inhibitory inputs, amplifies them with voltage-gated channels, and generates spikes that propagate to the axon via the soma. Overall, the computation of direction in the retina is performed by several local dendritic mechanisms, both presynaptic and postsynaptic, with the result that directional responses are robust over a broad range of stimuli.<br />
<br />
'''16 April 2014'''<br />
* Speaker: David Pfau<br />
* Affiliation: Columbia<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''22 April 2014 *Tuesday*'''<br />
* Speaker: Jochen Braun<br />
* Affiliation: Otto-von-Guericke University, Magdeburg<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Dynamics of visual perception and collective neural activity<br />
* Abstract:<br />
<br />
'''29 April 2014'''<br />
* Speaker: Giuseppe Vitiello<br />
* Affiliation: University of Salerno<br />
* Host: Fritz/Walter Freeman<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''30 April 2014'''<br />
* Speaker: Masataka Watanabe<br />
* Affiliation: University of Tokyo / Max Planck Institute for Biological Cybernetics<br />
* Host: Gautam Agarwal<br />
* Status: confirmed<br />
* Title: Turing Test for Machine Consciousness and the Chaotic Spatiotemporal Fluctuation Hypothesis<br />
* Abstract: I propose an experimental method to test various hypotheses on consciousness. Inspired by Sperry's observation that split-brain patients possess two independent streams of consciousness, the idea is to implement candidate neural mechanisms of visual consciousness onto an artificial cortical hemisphere and test whether subjective experience is evoked in the device's visual hemifield. In contrast to modern neurosynthetic devices, I show that mimicking interhemispheric connectivity assures that authentic and fine-grained subjective experience arises only when a stream of consciousness is generated within the device. It is valid under a widely believed assumption regarding interhemispheric connectivity and neuronal stimulus-invariance. (I will briefly explain my own evidence of human V1 not responding to changes in the contents of visual awareness [1])<br />
<br />
If consciousness is actually generated within the device, we should be able to construct a case where two objects presented in the device's visual field are distinguishable by visual experience but not by what is communicated through the brain-machine interface. As strange as it may sound, and clearly violating the laws of physics, this is likely to be happening in the intact brain, where unified subjective bilateral vision and its verbal report occur without the total interhemispheric exchange of conscious visual information.<br />
<br />
Together, I present a hypothesis on the neural mechanism of consciousness, “The Chaotic Spatiotemporal Fluctuation Hypothesis”, that passes the proposed test for visual qualia and also explains how the physics we know of today is violated. Here, neural activity is divided into two components, the time-averaged activity and the residual temporally fluctuating activity, where the former serves as the content of consciousness (the neuronal population vector) and the latter as consciousness itself. The content is “read” into consciousness in the sense that every local perturbation caused by a change in the neuronal population vector creates a spatiotemporal wave in the fluctuation component that travels throughout the system. Deterministic chaos assures that every local difference makes a difference to the whole of the dynamics, as in the butterfly effect, serving as a foundation for the holistic nature of consciousness. I will present data from simultaneous electrophysiology-fMRI recordings and human fMRI [2] that support the existence of such large-scale causal fluctuation.<br />
<br />
Here, the chaotic fluctuation cannot be decoded to trace back the original perturbation in the neuronal population vector, because the initial states of all neurons would be required with infinite precision to do so. Hence what is transmitted between the two hemispheres is not "information" in the normal sense. This illustrates the violation of physics by the metaphysical assumption, "chaotic spatiotemporal fluctuation is consciousness", where unification of bilateral vision and the solving of visual tasks (e.g. perfect symmetry detection) are achieved without exchanging the otherwise required Shannon information between the two hemispheres.<br />
<br />
Finally, minimal and realistic versions of the proposed test for visual qualia can be conducted on laboratory animals to validate the hypothesis. These deal with two biological hemispheres, which we already know contain consciousness. We dissect the interhemispheric connectivity and instead form an artificial one that is capable of filtering out the neural fluctuation component. A limited interhemispheric connectivity may be sufficient, which would drastically reduce the technological challenge. If the subject is capable of conducting a bilateral stimulus-matching task with the full artificial interhemispheric connectivity, but not when the fluctuation component is filtered out, this can be considered strong supporting evidence for the hypothesis.<br />
<br />
1.Watanabe, M., Cheng, K., Ueno, K., Asamizuya, T., Tanaka, K., Logothetis, N., Attention but not awareness modulates the BOLD signal in the human V1 during binocular suppression. Science, 2011. 334(6057): p. 829-31.<br />
<br />
2.Watanabe, M., Bartels, A., Macke, J., Logothetis, N., Temporal jitter of the BOLD signal reveals a reliable initial dip and improved spatial resolution. Curr Biol, 2013. 23(21): p. 2146-50.<br />
<br />
'''11 June 2014'''<br />
* Speaker: Stuart Hameroff<br />
* Affiliation: University of Arizona, Tucson<br />
* Host: Gautam<br />
* Status: confirmed<br />
* Title: ‘Tuning the brain’ – Treating mental states through microtubule vibrations <br />
* Abstract: Do mental states derive entirely from brain neuronal membrane activities? Neuronal interiors are organized by microtubules (‘MTs’), protein polymers proposed to encode memory, process information and support consciousness. Using nanotechnology, Bandyopadhyay’s group at MIT has shown coherent vibrations (megahertz to 10 kilohertz) from microtubule bundles inside active neurons, vibrations (electric field potentials ~40 to 50 mV) able to influence membrane potentials. This suggests EEG rhythms are ‘beat’ frequencies of megahertz vibrations in microtubules inside neurons (Hameroff and Penrose, 2014), and that consciousness and cognition involve vibrational patterns resonating across scales in the brain, more like music than computation. MT megahertz may be a useful therapeutic target for ‘tuning’ mood and mental states. Among noninvasive transcranial brain stimulation techniques (TMS, TDCS), transcranial ultrasound (TUS) uses megahertz mechanical vibrations. Applied at the scalp, low-intensity, sub-thermal ultrasound (TUS) safely reaches the brain. In human studies, brief (15 to 30 seconds) TUS at 0.5, 2 and 8 megahertz to frontal-temporal cortex results in 40 minutes or longer of reported mood improvement, and focused TUS enhances sensory discrimination (Legon et al, 2014). In vitro, ultrasound promotes neurite outgrowth in embryonic neurons (Raman), and stabilizes microtubules against disassembly (Gupta). (In Alzheimer’s disease, MTs disassemble and release tau.) These findings suggest ‘tuning the brain’ with TUS should be a safe, effective and inexpensive treatment for Alzheimer’s, traumatic brain injury, depression, anxiety, PTSD and other disorders.<br />
<br />
References: Hameroff S, Penrose R (2014) Phys Life Rev http://www.sciencedirect.com/science/article/pii/S1571064513001188; Sahu et al (2013) Biosens Bioelectron 47:141–8; Sahu et al (2013) Appl Phys Lett 102:123701; Legon et al (2014) Nature Neuroscience 17: 322–329<br />
<br />
'''25 June 2014'''<br />
* Speaker: Peter Loxley<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: The two-dimensional Gabor function adapted to natural image statistics: An analytical model of simple-cell responses in the early visual system<br />
* Abstract: TBA<br />
<br />
=== 2012/13 academic year ===<br />
<br />
'''26 Sept 2012''' <br />
* Speaker: Jason Yeatman<br />
* Affiliation: Department of Psychology, Stanford University<br />
* Host: Bruno/Susana Chung<br />
* Status: confirmed<br />
* Title: The Development of White Matter and Reading Skills<br />
* Abstract: The development of cerebral white matter involves both myelination and pruning of axons, and the balance between these two processes may differ between individuals. Cross-sectional measures of white matter development mask the interplay between these active developmental processes and their connection to cognitive development. We followed a cohort of 39 children longitudinally for three years, and measured white matter development and reading development using diffusion tensor imaging and behavioral tests. In the left arcuate and inferior longitudinal fasciculus, children with above-average reading skills initially had low fractional anisotropy (FA) with a steady increase over the 3-year period, while children with below-average reading skills had higher initial FA that declined over time. We describe a dual-process model of white matter development that balances biological processes that have opposing effects on FA, such as axonal myelination and pruning, to explain the pattern of results.<br />
<br />
'''8 Oct 2012''' <br />
* Speaker: Sophie Deneve<br />
* Affiliation: Laboratoire de Neurosciences cognitives, ENS-INSERM<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Balanced spiking networks can implement dynamical systems with predictive coding<br />
* Abstract: Neural networks can integrate sensory information and generate continuously varying outputs, even though individual neurons communicate only with spikes---all-or-none events. Here we show how this can be done efficiently if spikes communicate "prediction errors" between neurons. We focus on the implementation of linear dynamical systems and derive a spiking network model from a single optimization principle. Our model naturally accounts for two puzzling aspects of cortex. First, it provides a rationale for the tight balance and correlations between excitation and inhibition. Second, it predicts asynchronous and irregular firing as a consequence of predictive population coding, even in the limit of vanishing noise. We show that our spiking networks have error-correcting properties that make them far more accurate and robust than comparable rate models. Our approach suggests spike times do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly under-estimated.<br />
<br />
<br />
'''19 Oct 2012'''<br />
* Speaker: Gert Van Dijck<br />
* Affiliation: Cambridge<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: A solution to identifying neurones using extracellular activity in awake animals: a probabilistic machine-learning approach<br />
* Abstract: Electrophysiological studies over the last fifty years have been hampered by the difficulty of reliably assigning signals to identified cortical neurones. Previous studies have employed a variety of measures based on spike timing or waveform characteristics to tentatively classify other neurone types (Vos et al., Eur. J. Neurosci., 1999; Prsa et al., J. Neurosci., 2009), in some cases supported by juxtacellular labelling (Simpson et al., Prog. Brain Res., 2005; Holtzman et al., J. Physiol., 2006; Barmack and Yakhnitsa, J. Neurosci., 2008; Ruigrok et al., J. Neurosci., 2011), or intracellular staining and/or assessment of membrane properties (Chadderton et al., Nature, 2004; Jorntell and Ekerot, J. Neurosci., 2006; Rancz et al., Nature, 2007). Anaesthetised animals have been widely used as they can provide a ground truth through neuronal labelling, which is much harder to achieve in awake animals, where spike-derived measures tend to be relied upon (Lansink et al., Eur. J. Neurosci., 2010). Whilst spike shapes carry potentially useful information for classifying neuronal classes, they vary with electrode type and the geometric relationship between the electrode and the spike generation zone (Van Dijck et al., Int. J. Neural Syst., 2012). Moreover, spike-shape measurement is achieved with a variety of techniques, making it difficult to compare and standardise between laboratories. In this study we build probabilistic models on the statistics derived from the spike trains of spontaneously active neurones in the cerebellum and the ventral midbrain. The mean spike frequency in combination with the log-interval entropy (Bhumbra and Dyball, J. Physiol.-London, 2004) of the inter-spike-interval distribution yields the highest prediction accuracy. The cerebellum model consists of two sub-models: a molecular layer - Purkinje layer model and a granular layer - Purkinje layer model. The first model identifies molecular layer interneurones and Purkinje cells with high accuracy (92.7 %), while the latter identifies Golgi cells, granule cells, mossy fibers and Purkinje cells with high accuracy (99.2 %). Furthermore, it is shown that the model trained on anaesthetized rat and decerebrate cat data has broad applicability to other species and behavioural states: anaesthetized mice (80 %), awake rabbits (94.2 %) and awake rhesus monkeys (89 - 90 %). Recently, optogenetics has made it possible to obtain a ground truth about cell classes. Using optogenetically identified GABAergic and dopaminergic cells, we build similar statistical models to identify these neuron types in the ventral midbrain. Hence, this illustrates that our approach will be of general use to a broad variety of laboratories.<br />
<br />
'''Tuesday, 23 Oct 2012''' <br />
* Speaker: Jaimie Sleigh<br />
* Affiliation: University of Auckland<br />
* Host: Fritz/Andrew Szeri<br />
* Status: confirmed<br />
* Title: Is General Anesthesia a failure of cortical information integration?<br />
* Abstract: General anesthesia and natural sleep share some commonalities and some differences. Quite a lot is known about the chemical and neuronal effects of general anesthetic drugs. There are two main groups of anesthetic drugs, which can be distinguished by their effects on the EEG. The most commonly used drugs exert a strong GABAergic action; whereas a second group is characterized by minimal GABAergic effects, but significant NMDA blockade. It is less clear which and how these various effects result in failure of the patient to wake up when the surgeon cuts them. I will present some results from experimental brain slice work, and theoretical mean field modelling of anesthesia and sleep, that support the idea that the final common mechanism of both types of anaesthesia is fragmentation of long distance information flow in the cortex.<br />
<br />
'''31 Oct 2012''' (Halloween)<br />
* Speaker: Jonathan Landy<br />
* Affiliation: UCSB<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Mean-field replica theory: review of basics and a new approach<br />
* Abstract: Replica theory provides a general method for evaluating the mode of a distribution, and has varied applications to problems in statistical mechanics, signal processing, etc. Evaluation of the formal expressions arising in replica theory represents a formidable technical challenge, but one that physicists have apparently intuited correct methods for handling. In this talk, I will first provide a review of the historical development of replica theory, covering: 1) motivation, 2) the intuited "Parisi ansatz" solution, 3) continued controversies, and 4) a survey of applications (including to neural networks). Following this, I will discuss an exploratory effort of mine, aimed at developing an ansatz-free solution method. As an example, I will work out the phase diagram for a simple spin-glass model. This talk is intended primarily as a tutorial.<br />
<br />
'''7 Nov 2012''' <br />
* Speaker: Tom Griffiths<br />
* Affiliation: UC Berkeley<br />
* Host: Daniel Little<br />
* Status: Confirmed<br />
* Title: Identifying human inductive biases<br />
* Abstract: People are remarkably good at acquiring complex knowledge from limited data, as is required in learning causal relationships, categories, or aspects of language. Successfully solving inductive problems of this kind requires having good "inductive biases" - constraints that guide inductive inference. Viewed abstractly, understanding human learning requires identifying these inductive biases and exploring their origins. I will argue that probabilistic models of cognition provide a framework that can facilitate this project, giving a transparent characterization of the inductive biases of ideal learners. I will outline how probabilistic models are traditionally used to solve this problem, and then present a new approach that uses Markov chain Monte Carlo algorithms as the basis for an experimental method that magnifies the effects of inductive biases.<br />
<br />
'''19 Nov 2012''' (Monday) (Thanksgiving week)<br />
* Speaker: Bin Yu<br />
* Affiliation: Dept. of Statistics and EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Representation of Natural Images in V4<br />
* Abstract: The functional organization of area V4 in the mammalian ventral visual pathway is far from being well understood. V4 is believed to play an important role in the recognition of shapes and objects and in visual attention, but the complexity of this cortical area makes it hard to analyze. In particular, no current model of V4 has shown good predictions for neuronal responses to natural images and there is no consensus on the primary role of V4.<br />
In this talk, we present an analysis of electrophysiological data on the responses of V4 neurons to natural images. We propose a new computational model that achieves prediction performance for V4 neurons comparable to that for V1 neurons. Our model does not rely on any pre-defined image features but only on invariance and sparse coding principles. We interpret our model using sparse principal component analysis and discover two groups of neurons: those selective to texture versus those selective to contours. This supports the thesis that one primary role of V4 is to extract objects from the background in the visual field. Moreover, our study also confirms the diversity of V4 neurons. Among those selective to contours, some are selective to orientation, others to acute curvature features.<br />
(This is joint work with J. Mairal, Y. Benjamini, B. Willmore, M. Oliver and J. Gallant.)<br />
<br />
'''30 Nov 2012''' <br />
* Speaker: Yan Karklin<br />
* Affiliation: NYU<br />
* Host: Tyler<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''10 Dec 2012 (note this would be the Monday after NIPS)''' <br />
* Speaker: Marius Pachitariu<br />
* Affiliation: Gatsby / UCL<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: NIPS paper "Learning visual motion in recurrent neural networks"<br />
* Abstract: We present a dynamic nonlinear generative model for visual motion based on a latent representation of binary-gated Gaussian variables connected in a network. Trained on sequences of images by an STDP-like rule, the model learns to represent different movement directions in different variables. We use an online approximate inference scheme that can be mapped to the dynamics of networks of neurons. Probed with drifting grating stimuli and moving bars of light, neurons in the model show patterns of responses analogous to those of direction-selective simple cells in primary visual cortex. We show how the computations of the model are enabled by a specific pattern of learnt asymmetric recurrent connections. I will also briefly discuss our application of recurrent neural networks as statistical models of simultaneously recorded spiking neurons.<br />
<br />
'''12 Dec 2012''' <br />
* Speaker: Ian Goodfellow<br />
* Affiliation: U Montreal<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''7 Jan 2013'''<br />
* Speaker: Stuart Hameroff<br />
* Affiliation: University of Arizona <br />
* Host: Gautam Agarwal<br />
* Status: confirmed<br />
* Title: Quantum cognition and brain microtubules <br />
* Abstract: Cognitive decision processes are generally modeled with classical Bayesian probability, but may be better suited to quantum mathematics. For example: 1) Psychological conflict, ambiguity and uncertainty can be viewed as (quantum) superposition of multiple possible judgments and beliefs. 2) Measurement (e.g. answering a question, reaching a decision) reduces possibilities to definite states (‘constructing reality’, ‘collapsing the wave function’). 3) Previous questions influence subsequent answers, so sequence affects outcomes (‘contextual non-commutativity’). 4) Judgments and choices may deviate from classical logic, suggesting random, or ‘non-computable’ quantum influences. Can quantum cognition operate in the brain? Do classical brain activities simulate quantum processes? Or have biomolecular quantum devices evolved? In this talk I will discuss how a finer scale, intra-neuronal level of quantum information processing in cytoskeletal microtubules can accumulate, operate upon and integrate quantum information and memory for self-collapse to classical states which regulate axonal firings, controlling behavior.<br />
<br />
'''Monday 14 Jan 2013, 1:00pm'''<br />
* Speaker: Dibyendu Mandal <br />
* Affiliation: Physics Dept., University of Maryland (Jarzynski group)<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: An exactly solvable model of Maxwell’s demon<br />
* Abstract: The paradox of Maxwell’s demon has stimulated numerous thought experiments, leading to discussions about the thermodynamic implications of information processing. However, the field has lacked a tangible example or model of an autonomous, mechanical system that reproduces the actions of the demon. To address this issue, we introduce an explicit model of a device that can deliver work to lift a mass against gravity by rectifying thermal fluctuations, while writing information to a memory register. We solve for the steady-state behavior of the model and construct its nonequilibrium phase diagram. In addition to the engine-like action described above, we identify a Landauer eraser region in the phase diagram where the model uses externally supplied work to remove information from the memory register. Our model offers a simple paradigm for investigating the thermodynamics of information processing by exposing a transparent mechanism of operation.<br />
<br />
'''23 Jan 2013'''<br />
* Speaker: Carlos Brody<br />
* Affiliation: Princeton<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Neural substrates of decision-making in the rat<br />
* Abstract: Gradual accumulation of evidence is thought to be a fundamental component of decision-making. Over the last 16 years, research in non-human primates has revealed neural correlates of evidence accumulation in parietal and frontal cortices, and other brain areas. However, the circuit mechanisms underlying these neural correlates remain unknown. Reasoning that a rodent model of evidence accumulation would allow a greater number of experimental subjects, and therefore experiments, as well as facilitate the use of molecular tools, we developed a rat accumulation of evidence task, the "Poisson Clicks" task. In this task, sensory evidence is delivered in pulses whose precisely-controlled timing varies widely within and across trials. The resulting data are analyzed with models of evidence accumulation that use the richly detailed information of each trial’s pulse timing to distinguish between different decision mechanisms. The method provides great statistical power, allowing us to: (1) provide compelling evidence that rats are indeed capable of gradually accumulating evidence for decision-making; (2) accurately estimate multiple parameters of the decision-making process from behavioral data; and (3) measure, for the first time, the diffusion constant of the evidence accumulator, which we show to be optimal (i.e., equal to zero). In addition, the method provides a trial-by-trial, moment-by-moment estimate of the value of the accumulator, which can then be compared in awake behaving electrophysiology experiments to trial-by-trial, moment-by-moment neural firing rate measures. Based on such a comparison, we describe data and a novel analysis approach that reveals differences between parietal and frontal cortices in the neural encoding of accumulating evidence. Finally, using semi-automated training methods to produce tens of rats trained in the Poisson Clicks accumulation of evidence task, we have also used pharmacological inactivation to ask, for the first time, whether parietal and frontal cortices are required for accumulation of evidence, and we are using optogenetic methods to rapidly and transiently inactivate brain regions so as to establish precisely when, during each decision-making trial, each brain region's activity is necessary for performance of the task.<br />
<br />
'''28 Jan 2013'''<br />
* Speaker: Eugene M. Izhikevich<br />
* Affiliation: Brain Corporation<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Spikes<br />
* Abstract: Most communication in the brain is via spikes. While we understand the spike-generation mechanism of individual neurons, we fail to appreciate the spike-timing code and its role in neural computations. The speaker starts with simple models of neuronal spiking and bursting, describes small neuronal circuits that learn spike-timing code via spike-timing dependent plasticity (STDP), and finishes with biologically detailed and anatomically accurate large-scale brain models.<br />
<br />
'''29 Jan 2013'''<br />
* Speaker: Goren Gordon<br />
* Affiliation: Weizmann Institute<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Hierarchical Curiosity Loops – Model, Behavior and Robotics<br />
* Abstract: Autonomously learning about one's own body and its interaction with the environment is a formidable challenge, yet it is ubiquitous in biology: every animal pup and every human infant accomplish this task in their first few months of life. Furthermore, biological agents’ curiosity actively drives them to explore and experiment in order to expedite their learning progress. To bridge the gap between biological and artificial agents, a formal mathematical theory of curiosity was developed that attempts to explain observed biological behaviors and enable curiosity emergence in robots. In the talk, I will present the hierarchical curiosity loops model, its application to rodent exploratory behavior and its implementation in a fully autonomously learning and behaving reaching robot.<br />
<br />
'''29 Jan 2013'''<br />
* Speaker: Jenny Read<br />
* Affiliation: Institute of Neuroscience, Newcastle University<br />
* Host: Sarah<br />
* Status: confirmed<br />
* Title: Stereoscopic vision<br />
* Abstract: [To be written]<br />
<br />
'''7 Feb 2013'''<br />
* Speaker: Valero Laparra<br />
* Affiliation: University of Valencia<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Empirical statistical analysis of phases in Gabor filtered natural images<br />
* Abstract:<br />
<br />
'''20 Feb 2013'''<br />
* Speaker: Dolores Bozovic<br />
* Affiliation: UCLA<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Bifurcations and phase-locking dynamics in the auditory system<br />
* Abstract: The inner ear constitutes a remarkable biological sensor that exhibits nanometer-scale sensitivity of mechanical detection. The first step in auditory processing is performed by hair cells, which convert movement into electrical signals via opening of mechanically gated ion channels. These cells operate in a viscous medium, but can nevertheless sustain oscillations, amplify incoming signals, and even exhibit spontaneous motility, indicating the presence of an underlying active amplification system. Theoretical models have proposed that a hair cell constitutes a nonlinear system with an internal feedback mechanism that can drive it across a bifurcation and into an unstable regime. Our experiments explore the nonlinear response as well as feedback mechanisms that enable self-tuning already at the peripheral level, as measured in vitro on sensory tissue. A simple dynamic systems framework will be discussed that captures the main features of the experimentally observed behavior in the form of an Arnold Tongue.<br />
<br />
'''27 March 2013'''<br />
* Speaker: Dale Purves<br />
* Affiliation: Duke<br />
* Host: Sarah<br />
* Status: confirmed<br />
* Title: How Visual Evolution Determines What We See<br />
* Abstract: Information about the physical world is excluded from visual stimuli by the nature of biological vision (the inverse optics problem). Nonetheless, humans and other visual animals routinely succeed in their environments. The talk will explain how the assignment of perceptual values to visual stimuli according to the frequency of occurrence of stimulus patterns resolves the inverse problem and determines the basic visual qualities we see. This interpretation of vision implies that the best (and perhaps the only) way to understand visual system circuitry is to evolve it, an idea supported by recent work.<br />
<br />
'''9 April 2013'''<br />
* Speaker: Mounya Elhilali<br />
* Affiliation: Johns Hopkins<br />
* Host: Tyler<br />
* Status: confirmed<br />
* Title: Attention at the cocktail party: Neural bases and computational strategies for auditory scene analysis<br />
* Abstract: The perceptual organization of sounds in the environment into coherent objects is a feat constantly facing the auditory system. It manifests itself in the everyday challenge faced by humans and animals alike to parse complex acoustic information arising from multiple sound sources into separate auditory streams. While seemingly effortless, uncovering the neural mechanisms and computational principles underlying this remarkable ability remains a challenge for both the experimental and theoretical neuroscience communities. In this talk, I discuss the potential role of neuronal tuning in mammalian primary auditory cortex in mediating this process. I also examine the role of mechanisms of attention in adapting this neural representation to reflect both the sensory content and the changing behavioral context of complex acoustic scenes.<br />
<br />
'''17 April 2013'''<br />
* Speaker: Wiktor Młynarski<br />
* Affiliation: Max Planck Institute for Mathematics in the Sciences<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: Statistical Models of Binaural Sounds<br />
* Abstract: The auditory system exploits disparities in the sounds arriving at the left and right ear to extract information about the spatial configuration of sound sources. According to the widely acknowledged Duplex Theory, sounds of low frequency are localized based on Interaural Time Differences (ITDs) and localization of high frequency sources relies on Interaural Level Differences (ILDs). Natural sounds, however, possess a rich structure and contain multiple frequency components. This leads to the question: what are the contributions of different cues to sound position identification in the natural environment and how much information do they carry about its spatial structure? In this talk, I will present my attempts to answer the above questions using statistical, generative models of naturalistic (simulated) and fully natural binaural sounds.<br />
<br />
'''15 May 2013'''<br />
* Speaker: Byron Yu<br />
* Affiliation: CMU<br />
* Host: Bruno/Jose (jointly sponsored with CNEP)<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''22 May 2013'''<br />
* Speaker: Bijan Pesaran<br />
* Affiliation: NYU<br />
* Host: Bruno/Jose (jointly sponsored with CNEP)<br />
* Status: confirmed <br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
=== 2011/12 academic year ===<br />
<br />
'''15 Sep 2011 (Thursday, at noon)'''<br />
* Speaker: Kathrin Berkner<br />
* Affiliation: Ricoh Innovations Inc.<br />
* Host: Ivana Tosic<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''21 Sep 2011'''<br />
* Speaker: Mike Kilgard<br />
* Affiliation: UT Dallas<br />
* Host: Michael Silver<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''27 Sep 2011'''<br />
* Speaker: Moshe Gur<br />
* Affiliation: Dept. of Biomedical Engineering, Technion, Israel Institute of Technology<br />
* Host: Bruno/Stan<br />
* Status: Confirmed<br />
* Title: On the unity of perception: How does the brain integrate activity evoked at different cortical loci?<br />
* Abstract: Any physical device we know of, including computers, must, when comparing A to B, send the information to some point C. I have done experiments in three modalities, somatosensory, auditory, and visual, where 2 different loci at the primary cortex are stimulated, and I argue that the "machine" converging hypothesis cannot explain the perceptual results. Thus we must assume a non-converging mechanism whereby the brain, at times, can compare (integrate, process) events that take place at different loci without sending the information to a common target. Once we allow for such a mechanism, many phenomena can be viewed differently. Take, for example, the question of how and where multi-sensory integration takes place: we perceive a synchronized talking face, yet detailed visual and auditory information is represented at very different brain loci.<br />
<br />
'''5 Oct 2011'''<br />
* Speaker: Susanne Still<br />
* Affiliation: University of Hawaii at Manoa<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: Predictive power, memory and dissipation in learning systems operating far from thermodynamic equilibrium<br />
* Abstract: Understanding the physical processes that underlie the functioning of biological computing machinery often requires describing processes that occur far from thermodynamic equilibrium. In recent years significant progress has been made in this area, most notably Jarzynski’s work relation and Crooks’ fluctuation theorem. In this talk I will explore how dissipation of energy is related to a system's information processing inefficiency. The focus is on driven systems that are embedded in a stochastic operating environment. If we describe the system as a state machine, then we can interpret the stochastic dynamics as performing a computation that results in an (implicit) model of the stochastic driving signal. I will show that instantaneous non-predictive information, which serves as a measure of model inefficiency, provides a lower bound on the average dissipated work. This implies that learning systems with larger predictive power can operate more energetically efficiently. We could speculate that biological systems may have evolved to reflect this kind of adaptation. One interesting insight here is that purely physical considerations lead to a requirement that is perfectly in line with the general belief that a useful model must be predictive (at fixed model complexity). Our result thereby ties together ideas from learning theory with basic non-equilibrium thermodynamics.<br />
<br />
'''19 Oct 2011'''<br />
* Speaker: Graham Cummins<br />
* Affiliation: WSU<br />
* Host: Jeff Teeters<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''26 Oct 2011'''<br />
* Speaker: Shinji Nishimoto<br />
* Affiliation: Gallant lab, UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''14 Dec 2011'''<br />
* Speaker: Austin Roorda<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: How the unstable eye sees a stable and moving world<br />
* Abstract:<br />
<br />
'''11 Jan 2012'''<br />
* Speaker: Ken Nakayama<br />
* Affiliation: Harvard University<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Subjective Contours<br />
* Abstract: The concept of the receptive field in visual science has been transformative. It fueled great discoveries of the second half of the 20th century, providing the dominant understanding of how the visual system works at its early stages. Its reign has been extended to the field of object recognition where, in the form of a linear classifier, it provides a framework to understand visual object recognition (DiCarlo and Cox, 2007).<br />
Untamed, however, are areas of visual perception, now more or less ignored, dubbed variously the 2.5-D sketch, mid-level vision, or surface representations. Here, neurons with their receptive fields seem unable to bridge the gap, to supply us with even a plausible speculative framework for understanding amodal completion, subjective contours and other surface phenomena. Correspondingly, these areas have become a backwater, ignored, leapt over.<br />
Subjective contours, however, remain as vivid as ever, even more so.<br />
Every day, our visual system makes countless visual inferences as to the layout of the world's surfaces and objects. What’s remarkable is that subjective contours visibly reveal these inferences.<br />
<br />
'''Tuesday, 24 Jan 2012'''<br />
* Speaker: Aniruddha Das<br />
* Affiliation: Columbia University<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''22 Feb 2012'''<br />
* Speaker: Elad Schneidman <br />
* Affiliation: Department of Neurobiology, Weizmann Institute of Science<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Sparse high order interaction networks underlie learnable neural population codes<br />
* Abstract:<br />
<br />
'''29 Feb 2012 (at noon as usual)'''<br />
* Speaker: Heather Read<br />
* Affiliation: U. Connecticut<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: "Transformation of sparse temporal coding from auditory colliculus and cortex"<br />
* Abstract: TBD<br />
<br />
'''1 Mar 2012 (note: Thurs)'''<br />
* Speaker: Daniel Zoran<br />
* Affiliation: Hebrew University, Jerusalem<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''7 Mar 2012'''<br />
* Speaker: David Sivak<br />
* Affiliation: UCB<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''8 Mar 2012'''<br />
* Speaker: Ivan Schwab<br />
* Affiliation: UC Davis<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Evolution's Witness: How Eyes Evolved<br />
* Abstract:<br />
<br />
'''14 Mar 2012'''<br />
* Speaker: David Sussillo<br />
* Affiliation:<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''18 April 2012'''<br />
* Speaker: Kristofer Bouchard<br />
* Affiliation: UCSF<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Cortical Foundations of Human Speech Production<br />
* Abstract:<br />
<br />
'''23 May 2012''' (rescheduled from April 11)<br />
* Speaker: Logan Grosenick<br />
* Affiliation: Stanford, Deisseroth & Suppes Labs<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: Acquisition, creation, & analysis of 4D light fields with applications to calcium imaging & optogenetics<br />
* Abstract: In Light Field Microscopy (LFM), images can be computationally refocused after they are captured [1]. This permits acquiring focal stacks and reconstructing volumes from a single camera frame. In Light Field Illumination (LFI), the same ideas can be used to create an illumination system that can deliver focused light to any position in a volume without moving optics, and these two devices (LFM/LFI) can be used together in the same system [2]. So far, these imaging and illumination systems have largely been used independently in proof-of-concept experiments [1,2]. In this talk I will discuss applications of a combined scanless volumetric imaging and volumetric illumination system applied to 4D calcium imaging and photostimulation of neurons in vivo and in vitro. The volumes resulting from these methods are large (>500,000 voxels per time point), collected at 10-100 frames per second, and highly correlated in space and time. Analyzing such data has required the development and application of machine learning methods appropriate to large, sparse, nonnegative data, as well as the estimation of neural graphical models from calcium transients. This talk will cover the reconstruction and creation of volumes in a microscope using Light Fields [1,2], and the current state-of-the-art for analyzing these large volumes in the context of calcium imaging and optogenetics. <br />
<br />
[1] M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz. Light Field Microscopy. ACM Transactions on Graphics 25(3), Proceedings of SIGGRAPH 2006.<br />
[2] M. Levoy, Z. Zhang, and I. McDowall. Recording and controlling the 4D light field in a microscope. Journal of Microscopy, Volume 235, Part 2, 2009, pp. 144-162. Cover article.<br />
<br />
BIO: Logan received bachelor's degrees with honors in Biology and Psychology, and a master's in Statistics, from Stanford. He is a Ph.D. candidate in the Neurosciences Program working in the labs of Karl Deisseroth and Patrick Suppes, and a trainee at the Stanford Center for Mind, Brain, and Computation. He is interested in developing and applying novel computational imaging and machine learning techniques in order to observe, control, and understand neuronal circuit dynamics.<br />
<br />
'''7 June 2012''' (Thursday)<br />
* Speaker: Mitya Chklovskii<br />
* Affiliation: Janelia<br />
* Host: Bruno<br />
* Status:<br />
* Title:<br />
* Abstract:<br />
<br />
'''27 June 2012''' <br />
* Speaker: Jerry Feldman<br />
* Affiliation:<br />
* Host: Bruno<br />
* Status:<br />
* Title:<br />
* Abstract:<br />
<br />
'''30 July 2012''' <br />
* Speaker: Lucas Theis<br />
* Affiliation: Matthias Bethge lab, Werner Reichardt Centre for Integrative Neuroscience, Tübingen<br />
* Host: Jascha<br />
* Status: Confirmed<br />
* Title: Hierarchical models of natural images<br />
* Abstract: Probabilistic models of natural images have been used to solve a variety of computer vision tasks, as well as to better understand the computations performed by the visual system in the brain. Many theoretical considerations and biological observations suggest that natural image models should be hierarchically organized, yet to date the best-known models are still based on what are better described as shallow representations. In this talk, I will present two image models. One is based on the idea of Gaussianization for greedily constructing hierarchical generative models. I will show that when combined with independent subspace analysis, it is able to compete with the state of the art for modeling image patches. The other model combines mixtures of Gaussian scale mixtures with a directed graphical model and multiscale image representations and is able to generate highly structured images of arbitrary size. Evaluating the model's likelihood and comparing it to a large number of other image models shows that it might well be the best model for natural images yet.<br />
<br />
(joint work with Reshad Hosseini and Matthias Bethge)<br />
<br />
=== 2010/11 academic year ===<br />
<br />
'''02 Sep 2010'''<br />
* Speaker: Johannes Burge<br />
* Affiliation: University of Texas at Austin<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''8 Sep 2010'''<br />
* Speaker: Tobi Szuts<br />
* Affiliation: Meister Lab/ Harvard U.<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Wireless recording of neural activity in the visual cortex of a freely moving rat.<br />
* Abstract: Conventional neural recording systems restrict behavioral experiments to a flat indoor environment compatible with the cable that tethers the subject to the recording instruments. To overcome these constraints, we developed a wireless multi-channel system for recording neural signals from a freely moving animal the size of a rat or larger. The device takes up to 64 voltage signals from implanted electrodes, samples each at 20 kHz, time-division multiplexes them onto a single output line, and transmits that output by radio frequency to a receiver and recording computer more than 60 m away. The system introduces less than 4 µV RMS of electrode-referred noise, comparable to wired recording systems and considerably less than biological noise. The system has a greater channel count or transmission distance than existing telemetry systems. The wireless system has been used to record from the visual cortex of a rat during unconstrained conditions. Outdoor recordings show V1 activity is modulated by nest-building activity. During unguided behavior indoors, neurons responded rapidly and consistently to changes in light level, suppressive effects were prominent in response to an illuminant transition, and firing rate was strongly modulated by locomotion. Neural firing in the visual cortex is relatively sparse and moderate correlations are observed over large distances, suggesting that synchrony is driven by global processes.<br />
<br />
'''29 Sep 2010'''<br />
* Speaker: Vikash Gilja<br />
* Affiliation: Stanford University<br />
* Host: Charles<br />
* Status: Confirmed<br />
* Title: Towards Clinically Viable Neural Prosthetic Systems.<br />
* Abstract:<br />
<br />
'''20 Oct 2010'''<br />
* Speaker: Alexandre Francois<br />
* Affiliation: USC<br />
* Host: <br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''3 Nov 2010'''<br />
* Speaker: Eric Jonas and Vikash Mansinghka<br />
* Affiliation: Navia Systems<br />
* Host: Jascha<br />
* Status: Confirmed<br />
* Title: Natively Probabilistic Computation: Principles, Artifacts, Architectures and Applications<br />
* Abstract: Complex probabilistic models and Bayesian inference are becoming increasingly critical across science and industry, especially in large-scale data analysis. They are also central to our best computational accounts of human cognition, perception and action. However, all these efforts struggle with the infamous curse of dimensionality. Rich probabilistic models can seem hard to write and even harder to solve, as specifying and calculating probabilities often appears to require the manipulation of exponentially (and sometimes infinitely) large tables of numbers.<br />
<br />
We argue that these difficulties reflect a basic mismatch between the needs of probabilistic reasoning and the deterministic, functional orientation of our current hardware, programming languages and CS theory. To mitigate these issues, we have been developing a stack of abstractions for natively probabilistic computation, based around stochastic simulators (or samplers) for distributions, rather than evaluators for deterministic functions. Ultimately, our aim is to produce a model of computation and the associated hardware and programming tools that are as suited for uncertain inference and decision-making as our current computers are for precise arithmetic.<br />
<br />
In this talk, we will give an overview of the entire stack of abstractions supporting natively probabilistic computation, with technical detail on several hardware and software artifacts we have implemented so far. We will also touch on some new theoretical results regarding the computational complexity of probabilistic programs. Throughout, we will motivate and connect this work to some current applications in biomedical data analysis and computer vision, as well as potential hypotheses regarding the implementation of probabilistic computation in the brain.<br />
<br />
This talk includes joint work with Keith Bonawitz, Beau Cronin,<br />
Cameron Freer, Daniel Roy and Joshua Tenenbaum.<br />
<br />
BRIEF BIOGRAPHY<br />
<br />
Vikash Mansinghka is a co-founder and the CTO of Navia Systems, a<br />
venture-funded startup company building natively probabilistic<br />
computing machines. He spent 10 years at MIT, eventually earning an<br />
SB in Mathematics, an SB in Computer Science, an MEng in Computer<br />
Science, and a PhD in Computation. He held graduate fellowships from<br />
the NSF and MIT Lincoln Laboratory, and his PhD dissertation won<br />
the 2009 MIT George M. Sprowls award for best dissertation in computer<br />
science. He currently serves on DARPA's Information Science and<br />
Technology (ISAT) Study Group.<br />
<br />
Eric Jonas is a co-founder of Navia Systems, responsible for in-house<br />
accelerated inference research and development. He spent ten years at<br />
MIT, where he earned SB degrees in electrical engineering and computer<br />
science and neurobiology, an MEng in EECS, with a neurobiology PhD<br />
expected really soon. He’s passionate about biological applications<br />
of probabilistic reasoning and hopes to use Navia’s capabilities to<br />
combine data from biological science, clinical histories, and patient<br />
outcomes into seamless models.<br />
<br />
'''8 Nov 2010'''<br />
* Speaker: Patrick Ruther<br />
* Affiliation: Imtek, University of Freiburg<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''10 Nov 2010'''<br />
* Speaker: Aurel Lazar<br />
* Affiliation: Department of Electrical Engineering, Columbia University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Encoding Visual Stimuli with a Population of Hodgkin-Huxley Neurons<br />
* Abstract: We first present a general framework for the reconstruction of natural video<br />
scenes encoded with a population of spiking neural circuits with random thresholds.<br />
The visual encoding system consists of a bank of filters, modeling the visual<br />
receptive fields, in cascade with a population of neural circuits, modeling encoding<br />
with spikes in the early visual system.<br />
The neuron models considered include integrate-and-fire neurons and ON-OFF<br />
neuron pairs with threshold-and-fire spiking mechanisms. All thresholds are assumed<br />
to be random. We show that for both time-varying and space-time-varying stimuli, neural<br />
spike encoding is akin to taking noisy measurements on the stimulus.<br />
Second, we formulate the reconstruction problem as the minimization of a<br />
suitable cost functional in a finite-dimensional vector space and provide an explicit<br />
algorithm for stimulus recovery. We also present a general solution using the theory of<br />
smoothing splines in Reproducing Kernel Hilbert Spaces. We provide examples for both<br />
synthetic video and natural scenes and show that the quality of the<br />
reconstruction degrades gracefully as the threshold variability of the neurons increases.<br />
Third, we demonstrate a number of simple operations on the original visual stimulus<br />
including translations, rotations and zooming. All these operations are natively executed<br />
in the spike domain. The processed spike trains are decoded for the faithful recovery<br />
of the stimulus and its transformations.<br />
Finally, we extend the above results to neural encoding circuits built with Hodgkin-Huxley<br />
neurons.<br />
References:<br />
Aurel A. Lazar, Eftychios A. Pnevmatikakis and Yiyin Zhou,<br />
Encoding Natural Scenes with Neural Circuits with Random Thresholds, Vision Research, 2010,<br />
Special Issue on Mathematical Models of Visual Coding,<br />
http://dx.doi.org/10.1016/j.visres.2010.03.015<br />
Aurel A. Lazar,<br />
Population Encoding with Hodgkin-Huxley Neurons,<br />
IEEE Transactions on Information Theory, Volume 56, Number 2, pp. 821-837, February, 2010,<br />
Special Issue on Molecular Biology and Neuroscience,<br />
http://dx.doi.org/10.1109/TIT.2009.2037040<br />
<br />
'''11 Nov 2010''' (UCB holiday)<br />
* Speaker: Martha Nari Havenith<br />
* Affiliation: UCL<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: Finding spike timing in the visual cortex - Oscillations as the internal clock of vision?<br />
* Abstract:<br />
<br />
'''19 Nov 2010''' (note: on Friday because of SFN)<br />
* Speaker: Dan Butts<br />
* Affiliation: UMD<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: Common roles of inhibition in visual and auditory processing.<br />
* Abstract: The role of inhibition in sensory processing is often obscured in extracellular recordings, because the absence of a neuronal response associated with inhibition might also be explained by a simple lack of excitation. However, increasingly, evidence from intracellular recordings demonstrates important roles of inhibition in shaping the stimulus selectivity of sensory neurons in both the visual and auditory systems. We have developed a nonlinear modeling approach that can identify putative excitatory and inhibitory inputs to a neuron using standard extracellular recordings, and have applied these techniques to understand the role of inhibition in shaping sensory processing in visual and auditory areas. In pre-cortical visual areas (retina and LGN), we find that inhibition likely plays a role in generating temporally precise responses, and mediates adaptation to changing contrast. In an auditory pre-cortical area (inferior colliculus), identified inhibition has nearly identical appearance and functions in temporal processing and adaptation. Thus, we predict common roles of inhibition in these sensory areas, and, more generally, demonstrate methods for characterizing the nonlinear computations that comprise sensory processing.<br />
<br />
'''24 Nov 2010'''<br />
* Speaker: Eizaburo Doi<br />
* Affiliation: NYU<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
<br />
'''29 Nov 2010 - informal talk'''<br />
* Speaker: Eero Lehtonen<br />
* Affiliation: UTU Finland<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Memristors<br />
* Abstract:<br />
<br />
'''1 Dec 2010'''<br />
* Speaker: Gadi Geiger<br />
* Affiliation: MIT<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: Visual and Auditory Perceptual Modes that Characterize Dyslexics<br />
* Abstract: I will describe how dyslexics’ visual and auditory perception is wider and more diffuse than that of typical readers, which suggests wider neural tuning in dyslexics. In addition, I will describe how this processing relates to difficulties in reading. To strengthen the argument, and more importantly to help dyslexics, I will describe a regimen of practice that improves reading in dyslexics while narrowing perception.<br />
<br />
<br />
'''13 Dec 2010'''<br />
* Speaker: Jörg Lücke<br />
* Affiliation: FIAS<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Linear and Non-linear Approaches to Component Extraction and Their Applications to Visual Data<br />
* Abstract: In the nervous system of humans and animals, sensory data are represented as combinations of elementary data components. While for data such as sound waveforms the elementary components combine linearly, other data can better be modeled by non-linear forms of component superpositions. I motivate and discuss two models with binary latent variables: one using standard linear superpositions of basis functions and one using non-linear superpositions. Crucial for the applicability of both models are efficient learning procedures. I briefly introduce a novel training scheme (ET) and show how it can be applied to probabilistic generative models. For linear and non-linear models the scheme efficiently infers the basis functions as well as the level of sparseness and data noise. In large-scale applications to image patches, we show results on the statistics of inferred model parameters. Differences between the linear and non-linear models are discussed, and both models are compared to results of standard approaches in the literature and to experimental findings. Finally, I briefly discuss learning in a recent model that takes explicit component occlusions into account.<br />
<br />
'''15 Dec 2010'''<br />
* Speaker: Claudia Clopath<br />
* Affiliation: Université Paris Descartes<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
<br />
'''18 Jan 2011'''<br />
* Speaker: Siwei Lyu<br />
* Affiliation: Computer Science Department, University at Albany, SUNY<br />
* Host: Bruno<br />
* Status: confirmed <br />
* Title: Divisive Normalization as an Efficient Coding Transform: Justification and Evaluation<br />
* Abstract:<br />
<br />
'''19 Jan 2011'''<br />
* Speaker: David Field (informal talk)<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: Tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''25 Jan 2011'''<br />
* Speaker: Ruth Rosenholtz<br />
* Affiliation: Dept. of Brain & Cognitive Sciences, Computer Science and AI Lab, MIT<br />
* Host: Bruno<br />
* Status: Confirmed <br />
* Title: What your visual system sees where you are not looking<br />
* Abstract:<br />
<br />
'''26 Jan 2011'''<br />
* Speaker: Ernst Niebur<br />
* Affiliation: Johns Hopkins U<br />
* Host: Fritz<br />
* Status: Confirmed <br />
* Title: <br />
* Abstract:<br />
<br />
'''16 March 2011'''<br />
* Speaker: Vladimir Itskov<br />
* Affiliation: University of Nebraska-Lincoln<br />
* Host: Chris<br />
* Status: Confirmed <br />
* Title: <br />
* Abstract:<br />
<br />
'''23 March 2011'''<br />
* Speaker: Bruce Cumming<br />
* Affiliation: National Institutes of Health<br />
* Host: Ivana<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract:<br />
<br />
'''27 April 2011'''<br />
* Speaker: Lubomir Bourdev<br />
* Affiliation: Computer Science, UC Berkeley<br />
* Host:Bruno<br />
* Status: Confirmed<br />
* Title: "Poselets and Their Applications in High-Level Computer Vision Problems"<br />
* Abstract:<br />
<br />
'''12 May 2011 (note: Thursday)'''<br />
* Speaker: Jack Culpepper<br />
* Affiliation: Redwood Center/EECS<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''26 May 2011'''<br />
* Speaker: Ian Stevenson<br />
* Affiliation: Northwestern University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Explaining tuning curves by estimating interactions between neurons<br />
* Abstract: One of the central tenets of systems neuroscience is that tuning curves are a byproduct of the interactions between neurons. Using multi-electrode recordings and recently developed inference techniques we can begin to examine this idea in detail and study how well we can explain the functional properties of neurons using the activity of other simultaneously recorded neurons. Here we examine datasets from 6 different brain areas recorded during typical sensorimotor tasks each with ~100 simultaneously recorded neurons. Using these datasets we measured the extent to which interactions between neurons can explain the tuning properties of individual neurons. We found that, in almost all areas, modeling interactions between 30-50 neurons allows more accurate spike prediction than tuning curves. This suggests that tuning can, in some sense, be explained by interactions between neurons in a variety of brain areas, even when recordings consist of relatively small numbers of neurons.<br />
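The comparison the abstract describes (predicting a neuron's activity from its tuning curve versus from simultaneously recorded neighbors) can be sketched with a toy least-squares version. Everything below, including the cosine tuning, the shared-noise coupling, and all sizes, is invented for illustration and is far simpler than the spiking models used in such studies:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_neurons = 2000, 40

# Synthetic "population": every neuron is cosine-tuned to a common
# direction variable, plus a shared noise source that couples neurons.
direction = rng.uniform(0, 2 * np.pi, size=T)
shared = rng.normal(size=T)
prefs = rng.uniform(0, 2 * np.pi, size=n_neurons)
rates = (np.cos(direction[:, None] - prefs[None, :])
         + 0.8 * shared[:, None]
         + 0.3 * rng.normal(size=(T, n_neurons)))

target, peers = rates[:, 0], rates[:, 1:]

def r2(pred, y):
    """Fraction of variance explained."""
    return 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

# Model 1: tuning curve (cosine regression on the stimulus alone).
X_tune = np.column_stack([np.ones(T), np.cos(direction), np.sin(direction)])
pred_tune = X_tune @ np.linalg.lstsq(X_tune, target, rcond=None)[0]

# Model 2: linear readout of the other neurons' activity.
X_pop = np.column_stack([np.ones(T), peers])
pred_pop = X_pop @ np.linalg.lstsq(X_pop, target, rcond=None)[0]
```

Because the peers carry both the stimulus signal and the shared variability, the population readout explains more of the target neuron's variance than its tuning curve does, which is the qualitative effect the abstract reports.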
<br />
'''1 June 2011'''<br />
* Speaker: Michael Oliver<br />
* Affiliation: Gallant lab<br />
* Host: Bruno<br />
* Status: Tentative <br />
* Title: <br />
* Abstract:<br />
<br />
'''8 June 2011'''<br />
* Speaker: Alyson Fletcher<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: Generalized Approximate Message Passing for Neural Receptive Field Estimation and Connectivity<br />
* Abstract: Fundamental to understanding sensory encoding and connectivity of neurons are effective tools for developing and validating complex mathematical models from experimental data. In this talk, I present a graphical models approach to the problems of neural connectivity reconstruction under multi-neuron excitation and to receptive field estimation of sensory neurons in response to stimuli. I describe a new class of Generalized Approximate Message Passing (GAMP) algorithms for a general class of inference problems on graphical models, based on Gaussian approximations of loopy belief propagation. The GAMP framework is extremely general, provides a systematic procedure for incorporating a rich class of nonlinearities, and is computationally tractable with large amounts of data. In addition, for both the connectivity reconstruction and parameter estimation problems, I show that GAMP-based estimation can naturally incorporate sparsity constraints in the model that arise from the fact that only a small fraction of the potential inputs have any influence on the output of a particular neuron. A simulation of reconstruction of cortical neural mapping under multi-neuron excitation shows that GAMP offers improvement over previous compressed sensing methods. The GAMP method is also validated on estimation of linear nonlinear Poisson (LNP) cascade models for neural responses of salamander retinal ganglion cells.<br />
<br />
=== 2009/10 academic year ===<br />
<br />
'''2 September 2009''' <br />
* Speaker: Keith Godfrey<br />
* Affiliation: University of Cambridge<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''7 October 2009'''<br />
* Speaker: Anita Schmid<br />
* Affiliation: Cornell University<br />
* Host: Kilian<br />
* Status: Confirmed<br />
* Title: Subpopulations of neurons in visual area V2 perform differentiation and integration operations in space and time<br />
* Abstract: The interconnected areas of the visual system work together to find object boundaries in visual scenes. Primary visual cortex (V1) mainly extracts oriented luminance boundaries, while secondary visual cortex (V2) also detects boundaries defined by differences in texture. How the outputs of V1 neurons are combined to allow for the extraction of these more complex boundaries in V2 is as yet unclear. To address this question, we probed the processing of orientation signals in single neurons in V1 and V2, focusing on response dynamics of neurons to patches of oriented gratings and to combinations of gratings in neighboring patches and sequential time frames. We found two kinds of response dynamics in V2, both of which are different from those of V1 neurons. While V1 neurons in general prefer one orientation, one subpopulation of V2 neurons (“transient”) shows a temporally dynamic preference, resulting in a preference for changes in orientation. The second subpopulation of V2 neurons (“sustained”) responds similarly to V1 neurons, but with a delay. The dynamics of nonlinear responses to combinations of gratings reinforce these distinctions: the dynamics enhance the preference of V1 neurons for continuous orientations, and enhance the preference of V2 transient neurons for discontinuous ones. We propose that transient neurons in V2 perform a differentiation operation on the V1 input, both spatially and temporally, while the sustained neurons perform an integration operation. We show that a simple feedforward network with delayed inhibition can account for the temporal but not for the spatial differentiation operation.<br />
<br />
'''28 October 2009'''<br />
* Speaker: Andrea Benucci<br />
* Affiliation: Institute of Ophthalmology, University College London<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Stimulus dependence of the functional connectivity between neurons in primary visual cortex<br />
* Abstract: It is known that visual stimuli are encoded by the concerted activity of large populations of neurons in visual cortical areas. However, it is only recently that recording techniques have been made available to study such activations from large ensembles of neurons simultaneously, with millisecond temporal precision and tens of microns spatial resolution. I will present data from voltage-sensitive dye (VSD) imaging and multi-electrode recordings (“Utah” probes) from the primary visual cortex of the cat (V1). I will discuss the relationship between two fundamental cortical maps of the visual system: the map of retinotopy and the map of orientation. Using spatially localized and full-field oriented stimuli, we studied the functional interdependency of these maps. I will describe traveling and standing waves of cortical activity and their key role as a dynamical substrate for the spatio-temporal coding of visual information. I will further discuss the properties of the spatio-temporal code in the context of continuous visual stimulation. While recording population responses to a sequence of oriented stimuli, we asked how responses to individual stimuli summate over time. We found that such rules are mostly linear, supporting the idea that spatial and temporal codes in area V1 operate largely independently. However, these linear rules of summation fail when the visual drive is removed, suggesting that the visual cortex can readily switch between a dynamical regime where either feed-forward or intra-cortical inputs determine the response properties of the network.<br />
<br />
'''12 November 2009 (Thursday)'''<br />
* Speaker: Song-Chun Zhu<br />
* Affiliation: UCLA<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''18 November 2009'''<br />
* Speaker: Dan Graham<br />
* Affiliation: Dept. of Mathematics, Dartmouth College<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: The Packet-Switching Brain: A Hypothesis<br />
* Abstract: Despite great advances in our understanding of neural responses to natural stimuli, the basic structure of the neural code remains elusive. In this talk, I will describe a novel hypothesis regarding the fundamental structure of neural coding in mammals. In particular, I propose that an internet-like routing architecture (specifically packet-switching) underlies neocortical processing, and I propose means of testing this hypothesis via neural response sparseness measurements. I will synthesize a host of suggestive evidence that supports this notion and will, more generally, argue in favor of a large scale shift from the now dominant “computer metaphor,” to the “internet metaphor.” This shift is intended to spur new thinking with regard to neural coding, and its main contribution is to privilege communication over computation as the prime goal of neural systems.<br />
<br />
'''16 December 2009'''<br />
* Speaker: Pietro Berkes<br />
* Affiliation: Volen Center for Complex Systems, Brandeis University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Generative models of vision: from sparse coding toward structured models<br />
* Abstract: From a computational perspective, one can think of visual perception as the problem of analyzing the light patterns detected by the retina to recover their external causes. This process requires combining the incoming sensory evidence with internal prior knowledge about general properties of visual elements and the way they interact, and can be formalized in a class of models known as causal generative models. In the first part of the talk, I will discuss the first and most established generative model, namely the sparse coding model. Sparse coding has been largely successful in showing how the main characteristics of simple cells' receptive fields can be accounted for based uniquely on the statistics of natural images. I will briefly review the evidence supporting this model, and contrast it with recent data from the primary visual cortex of ferrets and rats showing that the sparseness of neural activity over development and anesthesia seems to follow trends opposite to those predicted by sparse coding. In the second part, I will argue that the generative point of view calls for models of natural images that take into account more of the structure of the visual environment. I will present a model that takes a first step in this direction by incorporating the fundamental distinction between identity and attributes of visual elements. After learning, the model mirrors several aspects of the organization of V1, and results in a novel interpretation of complex and simple cells as parallel populations of cells, coding for different aspects of the visual input. Further steps toward more structured generative models might thus lead to the development of a more comprehensive account of visual processing in the visual cortex.<br />
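The sparse coding idea the abstract reviews, inferring a sparse combination of dictionary elements that explains a patch, can be illustrated with a minimal ISTA (iterative soft-thresholding) sketch. The dictionary, patch, penalty weight, and iteration count below are all invented for illustration; this is not the model from the talk:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sparse coding setup: a "patch" is modeled as a sparse
# combination of dictionary elements (basis functions).
n_pix, n_basis = 64, 128
Phi = rng.normal(size=(n_pix, n_basis))
Phi /= np.linalg.norm(Phi, axis=0)           # unit-norm basis functions

# Generate a patch from a known 5-sparse code, plus a little noise.
a_true = np.zeros(n_basis)
a_true[rng.choice(n_basis, size=5, replace=False)] = rng.normal(size=5)
patch = Phi @ a_true + 0.01 * rng.normal(size=n_pix)

# ISTA minimizes  ||patch - Phi a||^2 / 2 + lam * ||a||_1
# by alternating a gradient step with elementwise soft-thresholding.
lam = 0.05
step = 1.0 / np.linalg.norm(Phi, 2) ** 2     # 1 / Lipschitz constant
a = np.zeros(n_basis)
for _ in range(500):
    grad = Phi.T @ (Phi @ a - patch)
    a = a - step * grad
    a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)

n_active = int(np.sum(np.abs(a) > 1e-3))     # inferred code is sparse
```

The soft-thresholding step is what enforces sparseness: most coefficients are driven exactly to zero, so only a handful of basis functions remain active for any given patch.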
<br />
'''6 January 2010'''<br />
* Speaker: Susanne Still<br />
* Affiliation: U of Hawaii<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''20 January 2010'''<br />
* Speaker: Tom Dean<br />
* Affiliation: Google<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Accelerating Computer Vision and Machine Learning Algorithms with Graphics Processors<br />
* Abstract: Graphics processors (GPUs) and massively-multi-core architectures are becoming more powerful, less costly and more energy efficient, and the related programming language issues are beginning to sort themselves out. That said, most researchers don’t want to be writing code that depends on any particular architecture or parallel programming model. Linear algebra, Fourier analysis and image processing have standard libraries that are being ported to exploit SIMD parallelism in GPUs. We can depend on the massively-multi-core machines du jour to support these libraries and on the high-performance-computing (HPC) community to do the porting for us or with us. These libraries can significantly accelerate important applications in image processing, data analysis and information retrieval. We can develop APIs and the necessary run-time support so that code relying on these libraries will run on any machine in a cluster of computers but exploit GPUs whenever available. This strategy allows us to move toward hybrid computing models that enable a wider range of opportunities for parallelism without requiring the special training of programmers or the disadvantages of developing code that depends on specialized hardware or programming models. This talk summarizes the state of the art in massively-multi-core architectures, presents experimental results that demonstrate the potential for significant performance gains in the two general areas of image processing and machine learning, provides examples of the proposed programming interface, and gives some more detailed experimental results on one particular problem involving video-content analysis.<br />
<br />
'''27 January 2010'''<br />
* Speaker: David Philiponna<br />
* Affiliation: Paris<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''24 February 2010'''<br />
* Speaker: Gordon Pipa<br />
* Affiliation: U Osnabrueck/MPI Frankfurt<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''3 March 2010'''<br />
* Speaker: Gaute Einevoll<br />
* Affiliation: UMB, Norway<br />
* Host: Amir<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
<br />
'''4 March 2010'''<br />
* Speaker: Harvey Swadlow<br />
* Affiliation: <br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''8 April 2010'''<br />
* Speaker: Alan Yuille <br />
* Affiliation: UCLA<br />
* Host: Amir<br />
* Status: Confirmed (for 1pm)<br />
* Title: <br />
* Abstract:<br />
<br />
'''28 April 2010'''<br />
* Speaker: Dharmendra Modha - cancelled<br />
* Affiliation: IBM<br />
* Host:Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''5 May 2010'''<br />
* Speaker: David Zipser<br />
* Affiliation: UCB<br />
* Host: Daniel Little<br />
* Status: Tentative<br />
* Title: Brytes 2:<br />
* Abstract:<br />
<br />
Brytes are little brains that can be assembled into larger, smarter brains. In my first talk I presented a biologically plausible, computationally tractable model of brytes and described how they can be used as subunits to build brains with interesting behaviors.<br />
<br />
In this talk I will first show how large numbers of brytes can cooperate to perform complicated actions such as arm and hand manipulations in the presence of obstacles. Then I will describe a strategy for a higher level of control that informs each bryte what role it should play in accomplishing the current task. These results could have considerable significance for understanding the brain and possibly be applicable to robotics and BMI.<br />
<br />
'''12 May 2010'''<br />
* Speaker: Frank Werblin (Redwood group meeting - internal only)<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: Tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''19 May 2010'''<br />
* Speaker: Anna Judith<br />
* Affiliation: UCB<br />
* Host: Daniel Little (Redwood Lab Meeting - internal only)<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:</div>Giselyhttps://rctn.org/w/index.php?title=Seminars&diff=7599Seminars2014-08-27T18:47:01Z<p>Gisely: /* Tentative / Confirmed Speakers */</p>
<hr />
<div>== Instructions ==<br />
<br />
# Check the internal calendar (here) for a free seminar slot. Seminars are usually Wednesdays at noon, but it is flexible in case there is a day that works better for the speaker. However, it is usually best to avoid booking multiple speakers in the same week - it leads to "seminar burnout" and reduced attendance. But use your own judgement here - if it's a good opportunity and that's the only time that works then go ahead with it.<br />
# Once you have proposed a date to a speaker, fill in the speaker information under the appropriate date (or change if necessary). Use the status field to indicate whether the date is tentative or confirmed. Please also include your name as ''host'' in case somebody wants to contact you.<br />
# Once the invitation is confirmed with the speaker, change the status field to 'confirmed'. Also notify the webmaster (Bruno) [mailto:baolshausen@berkeley.edu] that we have a confirmed speaker so that he/she can update the public web page. Please include a title and abstract.<br />
# Natalie (HWNI) checks our web page regularly and will send out an announcement a week before and also include with the weekly neuro announcements, but if you don't get it confirmed until the last minute then make sure to email Natalie [mailto:nrterranova@berkeley.edu] as well to give her a heads up so she knows to send out an announcement in time.<br />
# If the speaker needs accommodations you should contact Natalie [mailto:nrterranova@berkeley.edu] to reserve a room at the faculty club. Tell her it's for a Redwood speaker so she knows how to bill it.<br />
# During the visit you will need to look after the visitor, schedule visits with other labs, make plans for lunch, dinner, etc., and introduce the speaker at the seminar (don't ask Bruno to do this at the last moment). Save receipts for any meals you paid for.<br />
# After the seminar and before the speaker leaves, make sure to give them Natalie's contact info and have them email her their receipts, explaining it's for reimbursement for a Redwood seminar. Natalie will then process the reimbursement. She can also help you with getting reimbursed for any expenses you incurred for meals and entertainment.<br />
<br />
== Tentative / Confirmed Speakers ==<br />
<br />
'''2 Sept 2014'''<br />
* Speaker: Oriol Vinyals <br />
* Affiliation: Google<br />
* Host: Guy<br />
* Status: confirmed<br />
* Title: Machine Translation with Long-Short Term Memory Models<br />
* Abstract: Large supervised deep neural networks have achieved good results on speech recognition and computer vision. Although very successful, deep neural networks can only be applied to problems whose inputs and outputs can be conveniently encoded with vectors of fixed dimensionality; they cannot easily be applied to problems whose inputs and outputs are sequences. In this work, we show how to use a large deep Long Short-Term Memory (LSTM) model to solve domain-agnostic supervised sequence-to-sequence problems with minimal manual engineering. Our model uses one LSTM to map the input sequence to a vector of fixed dimensionality and another LSTM to map the vector to the output sequence. We applied our model to a machine translation task and achieved encouraging results. On the WMT'14 translation task from English to French, a model combination of 6 large LSTMs achieves a BLEU score of 32.3 (where a larger score is better). For comparison, a strong standard statistical MT baseline achieves a BLEU score of 33.3. When we use our LSTM to rescore the n-best lists produced by the SMT baseline, we achieve a BLEU score of 36.3, which is a new state of the art.<br />
<br />
This is joint work with Ilya Sutskever and Quoc Le.<br />
<br />
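The encoder-decoder scheme the abstract describes (one recurrent net maps a variable-length input to a fixed-size vector, a second unrolls that vector into an output sequence) can be sketched in a few lines. This toy uses plain tanh-RNN cells with random weights in place of trained LSTMs, and all dimensions are invented, so it only illustrates the data flow, not the actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes, invented for illustration, not from the paper.
d_in, d_hid, d_out = 8, 16, 8

# Random weights stand in for trained LSTM parameters.
W_enc_x = rng.normal(scale=0.1, size=(d_hid, d_in))
W_enc_h = rng.normal(scale=0.1, size=(d_hid, d_hid))
W_dec_h = rng.normal(scale=0.1, size=(d_hid, d_hid))
W_out = rng.normal(scale=0.1, size=(d_out, d_hid))

def encode(xs):
    """First recurrent net: variable-length sequence -> one fixed-size vector."""
    h = np.zeros(d_hid)
    for x in xs:
        h = np.tanh(W_enc_x @ x + W_enc_h @ h)
    return h  # same dimensionality no matter how long xs is

def decode(h, n_steps):
    """Second recurrent net: unroll the vector into an output sequence."""
    ys = []
    for _ in range(n_steps):
        h = np.tanh(W_dec_h @ h)
        ys.append(W_out @ h)
    return ys

source = [rng.normal(size=d_in) for _ in range(5)]  # e.g. embedded source words
summary = encode(source)                            # fixed-size representation
target = decode(summary, n_steps=7)                 # e.g. a 7-token output
```

The point of the construction is that the input and output sequences may have different, arbitrary lengths, while everything between them passes through one fixed-dimensional vector.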
'''19 Sept 2014'''<br />
* Speaker: Gary Marcus<br />
* Affiliation: NYU<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''24 Sept 2014'''<br />
* Speaker: Alyosha Efros<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''10 Nov 2014'''<br />
* Speaker: Steve Chase<br />
* Affiliation: CMU<br />
* Host: Bruno/Jose - Redwood/CNEP<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''January 21, 2015'''<br />
* Speaker: Adrienne Fairhall<br />
* Affiliation: University of Washington<br />
* Host: Mike Schachter<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''TBD'''<br />
* Speaker: Allie Fletcher<br />
* Affiliation: UC Santa Cruz<br />
* Host: Mike S<br />
* Status: confirmed<br />
* Title: Scalable Identification for Structured Nonlinear Neural Systems<br />
* Abstract: TBA<br />
<br />
'''TBD'''<br />
* Speaker: Kanaka Rajan<br />
* Affiliation: Princeton University<br />
* Host: Jeff Teeters/Sarah Marzen<br />
* Status: was confirmed, needs to reschedule<br />
* Title: Generation of sequences through reconfiguration of ongoing activity in neural networks: A model of choice-specific cortical dynamics in virtual navigation<br />
* Abstract: TBA<br />
<br />
== Previous Seminars ==<br />
<br />
=== 2014/15 academic year ===<br />
<br />
'''2 July 2014'''<br />
* Speaker: Kelly Clancy<br />
* Affiliation: Feldman lab<br />
* Host: Guy<br />
* Status: confirmed<br />
* Title: Volitional control of neural assemblies in L2/3 of motor and somatosensory cortices<br />
* Abstract: I'll be talking about a joint effort between the Feldman, Carmena and Costa labs to study abstract task learning by small neuronal assemblies in intact networks. Brain-machine interfaces are a unique tool for studying learning, thanks to the direct mapping between neural activity and reward. We trained mice to operantly control an auditory cursor using spike-related calcium signals recorded with two-photon imaging in motor and somatosensory cortex, allowing us to assess the effects of learning with great spatial detail. Mice rapidly learned to modulate activity in layer 2/3 neurons, evident both across and within sessions. Interestingly, even neurons that exhibited very low or no spontaneous spiking--so-called 'silent' cells that are invisible to electrode-based techniques--could be behaviorally up-modulated for task performance. Learning was accompanied by modifications of firing correlations in spatially localized networks at fine scales.<br />
<br />
'''23 July 2014'''<br />
* Speaker: Gautam Agarwal<br />
* Affiliation: UC Berkeley/Champalimaud<br />
* Host: Friedrich Sommer<br />
* Status: confirmed<br />
* Title: Unsolved Mysteries of Hippocampal Dynamics<br />
* Abstract: Two radically different forms of electrical activity can be observed in the rat hippocampus: spikes and local field potentials (LFPs). Hippocampal pyramidal neurons are mostly silent, yet spike vigorously as the subject encounters particular locations in its environment. In contrast, LFPs appear to lack place-selectivity, persisting regardless of the rat's location. Recently, we found that in fact one can recover from LFPs the spatial information present in the underlying neuronal population, showing how these two signals are two sides of the same coin. Nonetheless, there are many aspects of the LFP that remain mysterious. I will review several observations and explanatory gaps which await further study. These include: the relationship of LFP patterns to anatomy; the elusive structure of gamma waves; complex forms of cross-frequency coupling; variations in LFP patterns seen when the rat explores its world more freely; reconciling the memory and navigation roles of the hippocampus.<br />
<br />
'''6 Aug 2014'''<br />
* Speaker: Georg Martius<br />
* Affiliation: Max Planck Institute, Leipzig<br />
* Host: Fritz Sommer<br />
* Status: confirmed<br />
* Title: Information driven self-organization of robotic behavior<br />
* Abstract: Autonomy is a puzzling phenomenon in nature and a major challenge in the world of artifacts. A key feature of autonomy in both natural and artificial systems is the ability for independent exploration. In animals and humans, the ability to modify one's own pattern of activity is not only an indispensable trait for adaptation and survival in new situations; it also provides a learning system with novel information for improving its cognitive capabilities, and it is essential for development. Efficient exploration in high-dimensional spaces is a major challenge in building learning systems. We propose to implement exploration as a deterministic law derived from maximizing an information quantity. More specifically, we use the predictive information of the sensor process (of a robot) to obtain an update rule (exploration dynamics) for the controller parameters. To be adequate for robotics applications, the non-stationary nature of the underlying time series has to be taken into account, which we do by proposing the time-local predictive information (TiPI). Importantly, the exploration dynamics is derived analytically, and by this we link information theory and dynamical systems. Without a random component, the change in the parameters is deterministically given as a function of the states in a certain time window. For an embodied system this means in particular that constraints, responses, and current knowledge of the dynamical interaction with the environment can directly be used to advance further exploration. Randomness is replaced with spontaneity, which we demonstrate to restrict the search space automatically to the physically relevant dimensions. Its effectiveness will be presented with various experiments on high-dimensional robotic systems, and we argue that this is a promising way to avoid the curse of dimensionality. This talk describes joint work with Ralf Der and Nihat Ay.<br />
<br />
'''15 Aug 2014'''<br />
* Speaker: Juergen Schmidhuber<br />
* Affiliation: IDSIA, Switzerland<br />
* Host: James/Shariq<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
=== 2013/14 academic year ===<br />
<br />
'''9 Oct 2013'''<br />
* Speaker: Ekaterina Brocke<br />
* Affiliation: KTH University, Stockholm, Sweden<br />
* Host: Tony<br />
* Status: confirmed<br />
* Title: Multiscale modeling in Neuroscience: first steps towards multiscale co-simulation tool development.<br />
* Abstract: Multiscale modeling and simulation attract an increasing number of neuroscientists who study how different levels of organization (networks of neurons, cellular/subcellular levels) interact with each other across multiple scales of space and time to mediate different brain functions. Different scales are usually described by different physical and mathematical formalisms, making the integration non-trivial. In this talk, I will discuss key phenomena in neuroscience that can be addressed using subcellular/cellular models, and possible approaches to performing multiscale simulations, in particular a co-simulation method. I will also introduce several multiscale "toy" models of cellular/subcellular levels that were developed with the aim of understanding numerical and technical problems that might appear during co-simulation. Finally, the first steps made towards development of a multiscale co-simulation tool will be presented.<br />
<br />
'''29 Oct 2013 - note: 4:00'''<br />
* Speaker: Mitya Chklovskii<br />
* Affiliation: HHMI/Janelia Farm<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''30 Oct 2013'''<br />
* Speaker: Ilya Nemenman<br />
* Affiliation: Emory University, Departments of Physics and Biology<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Large N in neural data -- expecting the unexpected.<br />
* Abstract: Recently it has become possible to directly measure simultaneous collective states of many biological components, such as neural activities, genetic sequences, or gene expression profiles. These data are revealing striking results, suggesting, for example, that biological systems are tuned to criticality, and that effective models of these systems based on only pairwise interactions among constitutive components provide surprisingly good fits to the data. We will explore a handful of simplified theoretical models, largely focusing on statistical mechanics of Ising spins, that suggest plausible explanations for these observations. Specifically, I will argue that, at least in certain contexts, these intriguing observations should be expected in multivariate interacting data in the thermodynamic limit of many interacting components.<br />
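The "pairwise interactions" models referred to here are maximum-entropy (Ising) models. As a generic, hedged illustration (not the speaker's actual analysis), the Boltzmann distribution of a small pairwise model over &plusmn;1 spins can be computed exactly by enumeration:

```python
import itertools
import numpy as np

# Toy pairwise maximum-entropy (Ising) model -- a minimal sketch with
# arbitrary random parameters, purely for illustration.
N = 5
rng = np.random.default_rng(0)
h = 0.1 * rng.standard_normal(N)                      # local fields h_i
J = 0.1 * np.triu(rng.standard_normal((N, N)), k=1)   # couplings J_ij, i < j

def energy(s):
    """E(s) = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j"""
    return -(h @ s) - s @ J @ s

# With only N = 5 spins we can enumerate all 2^N states exactly.
states = np.array(list(itertools.product([-1, 1], repeat=N)))
E = np.array([energy(s) for s in states])
p = np.exp(-E)
p /= p.sum()                                          # Boltzmann distribution
```

For such small N exhaustive enumeration is exact; fitting these models to real neural data requires sampling or mean-field approximations.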
<br />
'''31 Oct 2013'''<br />
* Speaker: Oriol Vinyals<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno/Brian<br />
* Status: confirmed<br />
* Title: Beyond Deep Learning: Scalable Methods and Models for Learning<br />
* Abstract: In this talk I will briefly describe several techniques I explored in my thesis that improve how to efficiently model signal representations and learn useful information from them. The building block of my dissertation is based on machine learning approaches to classification, where a (typically non-linear) function is learned from labeled examples to map from signals to some useful information (e.g. an object class present in an image, or a word present in an acoustic signal). One of the motivating factors of my work has been advances in neural networks with deep architectures (which have led to the terminology "deep learning") that have shown state-of-the-art performance in acoustic modeling and object recognition -- the main focus of this thesis. In my work, I have contributed to both the learning (or training) of such architectures through faster and more robust optimization techniques, and also to the simplification of the deep architecture model to an approach that is simple to optimize. Furthermore, I derived a theoretical bound showing a fundamental limitation of shallow architectures based on sparse coding (which can be seen as a one-hidden-layer neural network), thus justifying the need for deeper architectures, while also empirically verifying these architectural choices on speech recognition. Many of my contributions have been used in a wide variety of applications, products and datasets as a result of many collaborations within ICSI and Berkeley, but also at Microsoft Research and Google Research.<br />
<br />
'''6 Nov 2013'''<br />
* Speaker: Garrett T. Kenyon<br />
* Affiliation: Los Alamos National Laboratory, The New Mexico Consortium<br />
* Host: Dylan Paiton<br />
* Status: Confirmed<br />
* Title: Using Locally Competitive Algorithms to Model Top-Down and Lateral Interactions<br />
* Abstract: Cortical connections consist of feedforward, feedback and lateral pathways. Infragranular layers project down the cortical hierarchy to both supra- and infragranular layers at the previous processing level, while the neurons in supragranular layers are linked by extensive long-range lateral projections that cross multiple cortical columns. However, most functional models of visual cortex only account for feedforward connections. Additionally, most models of visual cortex fail to account both for the thalamic projections to non-striate areas and the reciprocal connections from extrastriate areas back to the thalamus. In this talk, I will describe how a modified Locally Competitive Algorithm (LCA; Rozell et al, Neural Comp, 2008) can be used as a unifying framework for exploring the role of top-down and lateral cortical pathways within the context of deep, sparse, generative models. I will also describe an open source software tool called PetaVision that can be used to implement and execute hierarchical LCA-based models on multi-core, multi-node computer platforms without requiring specific knowledge of parallel-programming constructs.<br />
<br />
'''14 Nov 2013 (note: Thursday), ***12:30pm*** '''<br />
* Speaker: Geoffrey J Goodhill<br />
* Affiliation: Queensland Brain Institute and School of Mathematics and Physics, The University of Queensland, Australia<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Computational principles of neural wiring development<br />
* Abstract: Brain function depends on precise patterns of neural wiring. An axon navigating to its target must make guidance decisions based on noisy information from molecular cues in its environment. I will describe a combination of experimental and computational work showing that (1) axons may act as ideal observers when sensing chemotactic gradients, (2) the complex influence of calcium and cAMP levels on guidance decisions can be predicted mathematically, (3) the morphology of growth cones at the axonal tip can be understood in terms of just a few eigenshapes, and remarkably these shapes oscillate in time with periods ranging from minutes to hours. Together this work may shed light on how neural wiring goes wrong in some developmental brain disorders, and how best to promote appropriate regrowth of axons after injury.<br />
<br />
'''4 Dec 2013'''<br />
* Speaker: Zhenwen Dai<br />
* Affiliation: FIAS, Goethe University Frankfurt, Germany.<br />
* Host: Georgios Exarchakis<br />
* Status: Confirmed<br />
* Title: What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach <br />
* Abstract: We study optimal image encoding based on a generative approach with non-linear feature combinations and explicit position encoding. By far most approaches to unsupervised learning of visual features, such as sparse coding or ICA, account for translations by representing the same features at different positions. Some earlier models used a separate encoding of features and their positions to facilitate invariant data encoding and recognition. All probabilistic generative models with explicit position encoding have so far assumed a linear superposition of components to encode image patches. Here, we for the first time apply a model with non-linear feature superposition and explicit position encoding for patches. By avoiding linear superpositions, the studied model represents a closer match to component occlusions which are ubiquitous in natural images. In order to account for occlusions, the non-linear model encodes patches qualitatively very different from linear models by using component representations separated into mask and feature parameters. We first investigated encodings learned by the model using artificial data with mutually occluding components. We find that the model extracts the components, and that it can correctly identify the occlusive components with the hidden variables of the model. On natural image patches, the model learns component masks and features for typical image components. By using reverse correlation, we estimate the receptive fields associated with the model’s hidden units. We find many Gabor-like or globular receptive fields as well as fields sensitive to more complex structures. Our results show that probabilistic models that capture occlusions and invariances can be trained efficiently on image patches, and that the resulting encoding represents an alternative model for the neural encoding of images in the primary visual cortex. <br />
<br />
'''11 Dec 2013'''<br />
* Speaker: Kai Siedenburg<br />
* Affiliation: UC Davis, Petr Janata's Lab.<br />
* Host: Jesse Engel<br />
* Status: Confirmed<br />
* Title: Characterizing Short-Term Memory for Musical Timbre<br />
* Abstract: Short-term memory is a cognitive faculty central to the apprehension of music and speech. Little is known, however, about memory for musical timbre despite its “sisterhood” with speech; after all, speech can be regarded as a sequencing of vocal timbre. Past research has isolated many characteristic effects of verbal memory. Are these also in play for non-vocal timbre sequences? We studied this question by considering short-term memory for serial order. Using timbres and dissimilarity data from McAdams et al. (Psych. Research, 1995), we employed a same/different discrimination paradigm. Experiment 1 (N = 30 MU + 30 nonMU) revealed effects of sequence length and timbral dissimilarity of items, as well as an interaction of musical training and pitch variability: in contrast to musicians, non-musicians' performance was impaired by simultaneous changes in pitch, compared to a constant-pitch baseline. Experiment 2 (N = 22) studied whether musicians' memory for timbre sequences was independent of pitch irrespective of the degree of complexity of pitch progressions. Comparing sequences with pitch changing within and across standard and comparison to a constant-pitch baseline, performance was now clearly impaired in the variable-pitch condition. Experiment 3 (N = 22) showed primacy and recency effects for musicians, and reproduced a positive effect of timbral heterogeneity of sequences. Our findings demonstrate the presence of hallmark effects of verbal memory, such as similarity, word length, and primacy/recency, in the domain of non-vocal timbre, and suggest that memory for speech and non-vocal timbre sequences might to a large extent share underlying mechanisms.<br />
<br />
'''12 Dec 2013'''<br />
* Speaker: Matthias Bethge<br />
* Affiliation: University of Tubingen<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''22 Jan 2014'''<br />
* Speaker: Thomas Martinetz<br />
* Affiliation: Univ Luebeck<br />
* Host: Bruno/Fritz<br />
* Status: confirmed<br />
* Title: Orthogonal Sparse Coding and Sensing<br />
* Abstract: Sparse coding has been a very successful concept since many natural signals have the property of being sparse in some dictionary (basis). Some natural signals are even sparse in an orthogonal basis, most prominently natural images, which are sparse in a respective wavelet transform. An encoding in an orthogonal basis has a number of advantages, e.g., finding the optimal coding coefficients is simply a projection instead of being NP-hard. Given some data, we want to find the orthogonal basis which provides the sparsest code. This problem can be seen as a generalization of Principal Component Analysis. We present an algorithm, Orthogonal Sparse Coding (OSC), which is able to find this basis very robustly. On natural images, it compresses on the level of JPEG, but can adapt to arbitrary and special data sets and achieve significant improvements. With the property of being sparse in some orthogonal basis, we show how signals can be sensed very efficiently in a hierarchical manner with at most k log D sensing actions. This hierarchical sensing might relate to the way we sense the world, with interesting applications in active vision.<br />
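The "projection instead of NP-hard" point can be sketched in a few lines. This is a hedged illustration only: OSC learns the orthogonal dictionary from data, whereas here a random orthonormal matrix stands in for the learned basis.

```python
import numpy as np

rng = np.random.default_rng(0)
D, k = 16, 4

# Stand-in for a learned OSC dictionary: a random orthonormal basis.
Q, _ = np.linalg.qr(rng.standard_normal((D, D)))

def sparse_code(x, basis, k):
    """Optimal k-sparse code in an orthogonal basis: project, keep top-k."""
    a = basis.T @ x                      # projection yields all coefficients
    small = np.argsort(np.abs(a))[:-k]   # indices of the D - k smallest
    a[small] = 0.0
    return a

x = rng.standard_normal(D)
a = sparse_code(x, Q, k)
x_hat = Q @ a                            # reconstruction from k coefficients
```

Because the basis is orthonormal, hard-thresholding the projected coefficients gives the globally optimal k-sparse code; with a general overcomplete dictionary the same problem is NP-hard.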
<br />
'''29 Jan 2014'''<br />
* Speaker: David Klein<br />
* Affiliation: Audience<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''5 Feb 2014''' (leave open for Barth/Martinetz seminar)<br />
<br />
'''12 Feb 2014'''<br />
* Speaker: Ilya Sutskever <br />
* Affiliation: Google<br />
* Host: Zayd<br />
* Status: confirmed<br />
* Title: Continuous vector representations for machine translation<br />
* Abstract: Dictionaries and phrase tables are the basis of modern statistical machine translation systems. I will present a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures using large monolingual data, and by mapping between the languages using a small bilingual dataset. It uses distributed representations of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs. Joint work with Tomas Mikolov and Quoc Le.<br />
<br />
'''25 Feb 2014'''<br />
* Speaker: Alexander Terekhov <br />
* Affiliation: CNRS - Université Paris Descartes<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Constructing space: how a naive agent can learn spatial relationships by observing sensorimotor contingencies<br />
* Abstract:<br />
<br />
'''12 March 2014'''<br />
* Speaker: Carlos Portera-Cailliau<br />
* Affiliation: UCLA<br />
* Host: Mike<br />
* Status: confirmed<br />
* Title: Circuit defects in the neocortex of Fmr1 knockout mice<br />
* Abstract: TBA<br />
<br />
'''19 March 2014'''<br />
* Speaker: Dean Buonomano<br />
* Affiliation: UCLA<br />
* Host: Mike<br />
* Status: confirmed<br />
* Title: State-dependent Networks: Timing and Computations Based on Neural Dynamics and Short-term Plasticity<br />
* Abstract: The brain’s ability to seamlessly assimilate and process spatial and temporal information is critical to most behaviors, from understanding speech to playing the piano. Indeed, because the brain evolved to navigate a dynamic world, timing and temporal processing represent a fundamental computation. We have proposed that timing and the processing of temporal information emerge from the interaction between incoming stimuli and the internal state of neural networks. The internal state is defined not only by ongoing activity (the active state) but also by time-varying synaptic properties, such as short-term synaptic plasticity (the hidden state). One prediction of this hypothesis is that timing is a general property of cortical circuits. We provide evidence in this direction by demonstrating that in vitro cortical networks can “learn” simple temporal patterns. Finally, previous theoretical studies have suggested that recurrent networks capable of self-perpetuating activity hold significant computational potential. However, harnessing the computational potential of these networks has been hampered by the fact that such networks are chaotic. We show that it is possible to “tame” chaos through recurrent plasticity, and create a novel and powerful general framework for how cortical circuits compute.<br />
<br />
'''26 March 2014'''<br />
* Speaker: Robert G. Smith<br />
* Affiliation: University of Pennsylvania<br />
* Host: Mike S<br />
* Status: confirmed<br />
* Title: Role of Dendritic Computation in the Direction-Selective Circuit of Retina<br />
* Abstract: The retina utilizes a variety of signal processing mechanisms to compute direction from image motion. The computation is accomplished by a circuit that includes starburst amacrine cells (SBACs), which are GABAergic neurons presynaptic to direction-selective ganglion cells (DSGCs). SBACs are symmetric neurons with several branched dendrites radiating out from the soma. When a stimulus moving back and forth along a SBAC dendrite sequentially activates synaptic inputs, larger post-synaptic potentials (PSPs) are produced in the dendritic tips when the stimulus moves outwards from the soma. The directional difference in EPSP amplitude is further amplified near the dendritic tips by voltage-gated channels to produce directional release of GABA. Reciprocal inhibition between adjacent SBACs may also amplify directional release. Directional signals in the independent SBAC branches are preserved because each dendrite makes selective contacts only with DSGCs of the appropriate preferred direction. Directional signals are further enhanced within the dendritic arbor of the DSGC, which essentially comprises an array of distinct dendritic compartments. Each of these dendritic compartments locally sums excitatory and inhibitory inputs, amplifies them with voltage-gated channels, and generates spikes that propagate to the axon via the soma. Overall, the computation of direction in the retina is performed by several local dendritic mechanisms, both presynaptic and postsynaptic, with the result that directional responses are robust over a broad range of stimuli.<br />
<br />
'''16 April 2014'''<br />
* Speaker: David Pfau<br />
* Affiliation: Columbia<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''22 April 2014 *Tuesday*'''<br />
* Speaker: Jochen Braun<br />
* Affiliation: Otto-von-Guericke University, Magdeburg<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Dynamics of visual perception and collective neural activity<br />
* Abstract:<br />
<br />
'''29 April 2014'''<br />
* Speaker: Giuseppe Vitiello<br />
* Affiliation: University of Salerno<br />
* Host: Fritz/Walter Freeman<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''30 April 2014'''<br />
* Speaker: Masataka Watanabe<br />
* Affiliation: University of Tokyo / Max Planck Institute for Biological Cybernetics<br />
* Host: Gautam Agarwal<br />
* Status: confirmed<br />
* Title: Turing Test for Machine Consciousness and the Chaotic Spatiotemporal Fluctuation Hypothesis<br />
* Abstract: I propose an experimental method to test various hypotheses on consciousness. Inspired by Sperry's observation that split-brain patients possess two independent streams of consciousness, the idea is to implement candidate neural mechanisms of visual consciousness onto an artificial cortical hemisphere and test whether subjective experience is evoked in the device's visual hemifield. In contrast to modern neurosynthetic devices, I show that mimicking interhemispheric connectivity assures that authentic and fine-grained subjective experience arises only when a stream of consciousness is generated within the device. It is valid under a widely believed assumption regarding interhemispheric connectivity and neuronal stimulus-invariance. (I will briefly explain my own evidence of human V1 not responding to changes in the contents of visual awareness [1])<br />
<br />
If consciousness is actually generated within the device, we should be able to construct a case where two objects presented in the device's visual field are distinguishable by visual experience but not by what is communicated through the brain-machine interface. As strange as it may sound, and clearly violating the laws of physics, this is likely to be happening in the intact brain, where unified subjective bilateral vision and its verbal report occur without the total interhemispheric exchange of conscious visual information.<br />
<br />
Together, I present a hypothesis on the neural mechanism of consciousness, “The Chaotic Spatiotemporal Fluctuation Hypothesis”, that passes the proposed test for visual qualia and also explains how physics as we know it today is violated. Here, neural activity is divided into two components, the time-averaged activity and the residual temporally fluctuating activity, where the former serves as the content of consciousness (neuronal population vector) and the latter as consciousness itself. The content is “read” into consciousness in the sense that every local perturbation caused by change in the neuronal population vector creates a spatiotemporal wave in the fluctuation component that travels throughout the system. Deterministic chaos assures that every local difference makes a difference to the whole of the dynamics, as in the butterfly effect, serving as a foundation for the holistic nature of consciousness. I will present data from simultaneous electrophysiology-fMRI recordings and human fMRI [2] that supports the existence of such large-scale causal fluctuation.<br />
<br />
Here, the chaotic fluctuation cannot be decoded to trace back the original perturbation in the neuronal population vector, because initial states of all neurons are required with infinite precision to do so. Hence what is transmitted over the two hemispheres is not "information" in the normal sense. This illustrates the violation of physics by the metaphysical assumption, "chaotic spatiotemporal fluctuation is consciousness", where unification of bilateral vision and the solving of visual tasks (e.g. perfect symmetry detection) are achieved without exchanging the otherwise required Shannon information between the two hemispheres.<br />
<br />
Finally, minimal and realistic versions of the proposed test for visual qualia can be conducted on laboratory animals to validate the hypothesis. Such a test deals with two biological hemispheres, which we already know contain consciousness. We dissect interhemispheric connectivity and form instead an artificial one that is capable of filtering out the neural fluctuation component. A limited interhemispheric connectivity may be sufficient, which would drastically reduce the technological challenge. If the subject is capable of conducting a bilateral stimulus-matching task with the full artificial interhemispheric connectivity, but not when the fluctuation component is filtered out, it can be considered strong supporting evidence of the hypothesis.<br />
<br />
1.Watanabe, M., Cheng, K., Ueno, K., Asamizuya, T., Tanaka, K., Logothetis, N., Attention but not awareness modulates the BOLD signal in the human V1 during binocular suppression. Science, 2011. 334(6057): p. 829-31.<br />
<br />
2.Watanabe, M., Bartels, A., Macke, J., Logothetis, N., Temporal jitter of the BOLD signal reveals a reliable initial dip and improved spatial resolution. Curr Biol, 2013. 23(21): p. 2146-50.<br />
<br />
'''11 June 2014'''<br />
* Speaker: Stuart Hameroff<br />
* Affiliation: University of Arizona, Tucson<br />
* Host: Gautam<br />
* Status: confirmed<br />
* Title: ‘Tuning the brain’ – Treating mental states through microtubule vibrations <br />
* Abstract: Do mental states derive entirely from brain neuronal membrane activities? Neuronal interiors are organized by microtubules (‘MTs’), protein polymers proposed to encode memory, process information and support consciousness. Using nanotechnology, Bandyopadhyay’s group at MIT has shown coherent vibrations (megahertz to 10 kilohertz) from microtubule bundles inside active neurons, vibrations (electric field potentials ~40 to 50 mV) able to influence membrane potentials. This suggests EEG rhythms are ‘beat’ frequencies of megahertz vibrations in microtubules inside neurons (Hameroff and Penrose, 2014), and that consciousness and cognition involve vibrational patterns resonating across scales in the brain, more like music than computation. MT megahertz vibrations may be a useful therapeutic target for ‘tuning’ mood and mental states. Among noninvasive transcranial brain stimulation techniques (TMS, tDCS), transcranial ultrasound (TUS) delivers megahertz mechanical vibrations. Applied at the scalp, low-intensity, sub-thermal ultrasound safely reaches the brain. In human studies, brief (15 to 30 seconds) TUS at 0.5, 2 and 8 megahertz to frontal-temporal cortex results in 40 minutes or longer of reported mood improvement, and focused TUS enhances sensory discrimination (Legon et al, 2014). In vitro, ultrasound promotes neurite outgrowth in embryonic neurons (Raman), and stabilizes microtubules against disassembly (Gupta). (In Alzheimer’s disease, MTs disassemble and release tau.) These findings suggest ‘tuning the brain’ with TUS should be a safe, effective and inexpensive treatment for Alzheimer’s, traumatic brain injury, depression, anxiety, PTSD and other disorders.<br />
<br />
References: Hameroff S, Penrose R (2014) Phys Life Rev http://www.sciencedirect.com/science/article/pii/S1571064513001188; Sahu et al (2013) Biosens Bioelectron 47:141–8; Sahu et al (2013) Appl Phys Lett 102:123701; Legon et al (2014) Nature Neuroscience 17: 322–329<br />
<br />
'''25 June 2014'''<br />
* Speaker: Peter Loxley<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: The two-dimensional Gabor function adapted to natural image statistics: An analytical model of simple-cell responses in the early visual system<br />
* Abstract: TBA<br />
<br />
=== 2012/13 academic year ===<br />
<br />
'''26 Sept 2012''' <br />
* Speaker: Jason Yeatman<br />
* Affiliation: Department of Psychology, Stanford University<br />
* Host: Bruno/Susana Chung<br />
* Status: confirmed<br />
* Title: The Development of White Matter and Reading Skills<br />
* Abstract: The development of cerebral white matter involves both myelination and pruning of axons, and the balance between these two processes may differ between individuals. Cross-sectional measures of white matter development mask the interplay between these active developmental processes and their connection to cognitive development. We followed a cohort of 39 children longitudinally for three years, and measured white matter development and reading development using diffusion tensor imaging and behavioral tests. In the left arcuate and inferior longitudinal fasciculus, children with above-average reading skills initially had low fractional anisotropy (FA) with a steady increase over the 3-year period, while children with below-average reading skills had higher initial FA that declined over time. We describe a dual-process model of white matter development that balances biological processes that have opposing effects on FA, such as axonal myelination and pruning, to explain the pattern of results.<br />
<br />
'''8 Oct 2012''' <br />
* Speaker: Sophie Deneve<br />
* Affiliation: Laboratoire de Neurosciences cognitives, ENS-INSERM<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Balanced spiking networks can implement dynamical systems with predictive coding<br />
* Abstract: Neural networks can integrate sensory information and generate continuously varying outputs, even though individual neurons communicate only with spikes---all-or-none events. Here we show how this can be done efficiently if spikes communicate "prediction errors" between neurons. We focus on the implementation of linear dynamical systems and derive a spiking network model from a single optimization principle. Our model naturally accounts for two puzzling aspects of cortex. First, it provides a rationale for the tight balance and correlations between excitation and inhibition. Second, it predicts asynchronous and irregular firing as a consequence of predictive population coding, even in the limit of vanishing noise. We show that our spiking networks have error-correcting properties that make them far more accurate and robust than comparable rate models. Our approach suggests spike times do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly under-estimated.<br />
<br />
<br />
'''19 Oct 2012'''<br />
* Speaker: Gert Van Dijck<br />
* Affiliation: Cambridge<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: A solution to identifying neurones using extracellular activity in awake animals: a probabilistic machine-learning approach<br />
* Abstract: Electrophysiological studies over the last fifty years have been hampered by the difficulty of reliably assigning signals to identified cortical neurones. Previous studies have employed a variety of measures based on spike timing or waveform characteristics to tentatively classify other neurone types (Vos et al., Eur. J. Neurosci., 1999; Prsa et al., J. Neurosci., 2009), in some cases supported by juxtacellular labelling (Simpson et al., Prog. Brain Res., 2005; Holtzman et al., J. Physiol., 2006; Barmack and Yakhnitsa, J. Neurosci., 2008; Ruigrok et al., J. Neurosci., 2011), or intracellular staining and / or assessment of membrane properties (Chadderton et al., Nature, 2004; Jorntell and Ekerot, J. Neurosci., 2006; Rancz et al., Nature, 2007). Anaesthetised animals have been widely used as they can provide a ground-truth through neuronal labelling which is much harder to achieve in awake animals where spike-derived measures tend to be relied upon (Lansink et al., Eur. J. Neurosci., 2010). Whilst spike-shapes carry potentially useful information for classifying neuronal classes, they vary with electrode type and the geometric relationship between the electrode and the spike generation zone (Van Dijck et al., Int. J. Neural Syst., 2012). Moreover, spike-shape measurement is achieved with a variety of techniques, making it difficult to compare and standardise between laboratories. In this study we build probabilistic models on the statistics derived from the spike trains of spontaneously active neurones in the cerebellum and the ventral midbrain. The mean spike frequency in combination with the log-interval-entropy (Bhumbra and Dyball, J. Physiol.-London, 2004) of the inter-spike-interval distribution yields the highest prediction accuracy. The cerebellum model consists of two sub-models: a molecular layer - Purkinje layer model and a granular layer - Purkinje layer model. 
The first model identifies with high accuracy (92.7 %) molecular layer interneurones and Purkinje cells, while the latter identifies with high accuracy (99.2 %) Golgi cells, granule cells, mossy fibers and Purkinje cells. Furthermore, it is shown that the model trained on anaesthetized rat and decerebrate cat data has broad applicability to other species and behavioural states: anaesthetized mice (80 %), awake rabbits (94.2 %) and awake rhesus monkeys (89 - 90 %). Recently, optogenetics has made it possible to obtain a ground-truth about cell classes. Using optogenetically identified GABA-ergic and dopaminergic cells we build similar statistical models to identify these neuron types from the ventral midbrain. Hence, this illustrates that our approach will be of general use to a broad variety of laboratories.<br />
<br />
'''Tuesday, 23 Oct 2012''' <br />
* Speaker: Jaimie Sleigh<br />
* Affiliation: University of Auckland<br />
* Host: Fritz/Andrew Szeri<br />
* Status: confirmed<br />
* Title: Is General Anesthesia a failure of cortical information integration?<br />
* Abstract: General anesthesia and natural sleep share some commonalities and some differences. Quite a lot is known about the chemical and neuronal effects of general anesthetic drugs. There are two main groups of anesthetic drugs, which can be distinguished by their effects on the EEG. The most commonly used drugs exert a strong GABAergic action, whereas a second group is characterized by minimal GABAergic effects, but significant NMDA blockade. It is less clear which of these various effects, and by what mechanism, cause the failure of the patient to wake up when the surgeon cuts them. I will present some results from experimental brain slice work, and theoretical mean field modelling of anesthesia and sleep, that support the idea that the final common mechanism of both types of anaesthesia is fragmentation of long-distance information flow in the cortex.<br />
<br />
'''31 Oct 2012''' (Halloween)<br />
* Speaker: Jonathan Landy<br />
* Affiliation: UCSB<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Mean-field replica theory: review of basics and a new approach<br />
* Abstract: Replica theory provides a general method for evaluating the mode of a distribution, and has varied applications to problems in statistical mechanics, signal processing, etc. Evaluation of the formal expressions arising in replica theory represents a formidable technical challenge, but one that physicists have apparently intuited correct methods for handling. In this talk, I will first provide a review of the historical development of replica theory, covering: 1) motivation, 2) the intuited "Parisi ansatz" solution, 3) continued controversies, and 4) a survey of applications (including to neural networks). Following this, I will discuss an exploratory effort of mine, aimed at developing an ansatz-free solution method. As an example, I will work out the phase diagram for a simple spin-glass model. This talk is intended primarily as a tutorial.<br />
<br />
'''7 Nov 2012''' <br />
* Speaker: Tom Griffiths<br />
* Affiliation: UC Berkeley<br />
* Host:Daniel Little<br />
* Status: Confirmed<br />
* Title: Identifying human inductive biases<br />
* Abstract: People are remarkably good at acquiring complex knowledge from limited data, as is required in learning causal relationships, categories, or aspects of language. Successfully solving inductive problems of this kind requires having good "inductive biases" - constraints that guide inductive inference. Viewed abstractly, understanding human learning requires identifying these inductive biases and exploring their origins. I will argue that probabilistic models of cognition provide a framework that can facilitate this project, giving a transparent characterization of the inductive biases of ideal learners. I will outline how probabilistic models are traditionally used to solve this problem, and then present a new approach that uses Markov chain Monte Carlo algorithms as the basis for an experimental method that magnifies the effects of inductive biases.<br />
<br />
'''19 Nov 2012''' (Monday) (Thanksgiving week)<br />
* Speaker: Bin Yu<br />
* Affiliation: Dept. of Statistics and EECS, UC Berkeley<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Representation of Natural Images in V4<br />
* Abstract: The functional organization of area V4 in the mammalian ventral visual pathway is far from being well understood. V4 is believed to play an important role in the recognition of shapes and objects and in visual attention, but the complexity of this cortical area makes it hard to analyze. In particular, no current model of V4 has shown good predictions for neuronal responses to natural images and there is no consensus on the primary role of V4.<br />
In this talk, we present analysis of electrophysiological data on the response of V4 neurons to natural images. We propose a new computational model that achieves comparable prediction performance for V4 as for V1 neurons. Our model does not rely on any pre-defined image features but only on invariance and sparse coding principles. We interpret our model using sparse principal component analysis and discover two groups of neurons: those selective to texture versus those selective to contours. This supports the thesis that one primary role of V4 is to extract objects from background in the visual field. Moreover, our study also confirms the diversity of V4 neurons. Among those selective to contours, some are selective to orientation, others to acute curvature features.<br />
(This is joint work with J. Mairal, Y. Benjamini, B. Willmore, M. Oliver and J. Gallant.)<br />
<br />
'''30 Nov 2012''' <br />
* Speaker: Yan Karklin<br />
* Affiliation: NYU<br />
* Host: Tyler<br />
* Status: confirmed<br />
* Title: <br />
* Abstract: <br />
<br />
'''10 Dec 2012 (note this would be the Monday after NIPS)''' <br />
* Speaker: Marius Pachitariu<br />
* Affiliation: Gatsby / UCL<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: NIPS paper "Learning visual motion in recurrent neural networks"<br />
* Abstract: We present a dynamic nonlinear generative model for visual motion based on a latent representation of binary-gated Gaussian variables connected in a network. Trained on sequences of images by an STDP-like rule, the model learns to represent different movement directions in different variables. We use an online approximate inference scheme that can be mapped to the dynamics of networks of neurons. Probed with drifting grating stimuli and moving bars of light, neurons in the model show patterns of responses analogous to those of direction-selective simple cells in primary visual cortex. We show how the computations of the model are enabled by a specific pattern of learnt asymmetric recurrent connections. I will also briefly discuss our application of recurrent neural networks as statistical models of simultaneously recorded spiking neurons.<br />
<br />
'''12 Dec 2012''' <br />
* Speaker: Ian Goodfellow<br />
* Affiliation: U Montreal<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''7 Jan 2013'''<br />
* Speaker: Stuart Hameroff<br />
* Affiliation: University of Arizona <br />
* Host: Gautam Agarwal<br />
* Status: confirmed<br />
* Title: Quantum cognition and brain microtubules <br />
* Abstract: Cognitive decision processes are generally described with classical Bayesian probability, but may be better suited to quantum mathematics. For example: 1) Psychological conflict, ambiguity and uncertainty can be viewed as (quantum) superposition of multiple possible judgments and beliefs. 2) Measurement (e.g. answering a question, reaching a decision) reduces possibilities to definite states (‘constructing reality’, ‘collapsing the wave function’). 3) Previous questions influence subsequent answers, so sequence affects outcomes (‘contextual non-commutativity’). 4) Judgments and choices may deviate from classical logic, suggesting random, or ‘non-computable’ quantum influences. Can quantum cognition operate in the brain? Do classical brain activities simulate quantum processes? Or have biomolecular quantum devices evolved? In this talk I will discuss how a finer scale, intra-neuronal level of quantum information processing in cytoskeletal microtubules can accumulate, operate upon and integrate quantum information and memory for self-collapse to classical states which regulate axonal firings, controlling behavior.<br />
<br />
'''Monday 14 Jan 2013, 1:00pm'''<br />
* Speaker: Dibyendu Mandal <br />
* Affiliation: Physics Dept., University of Maryland (Jarzynski group)<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: An exactly solvable model of Maxwell’s demon<br />
* Abstract: The paradox of Maxwell’s demon has stimulated numerous thought experiments, leading to discussions about the thermodynamic implications of information processing. However, the field has lacked a tangible example or model of an autonomous, mechanical system that reproduces the actions of the demon. To address this issue, we introduce an explicit model of a device that can deliver work to lift a mass against gravity by rectifying thermal fluctuations, while writing information to a memory register. We solve for the steady-state behavior of the model and construct its nonequilibrium phase diagram. In addition to the engine-like action described above, we identify a Landauer eraser region in the phase diagram where the model uses externally supplied work to remove information from the memory register. Our model offers a simple paradigm for investigating the thermodynamics of information processing by exposing a transparent mechanism of operation.<br />
<br />
'''23 Jan 2013'''<br />
* Speaker: Carlos Brody<br />
* Affiliation: Princeton<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Neural substrates of decision-making in the rat<br />
* Abstract: Gradual accumulation of evidence is thought to be a fundamental component of decision-making. Over the last 16 years, research in non-human primates has revealed neural correlates of evidence accumulation in parietal and frontal cortices, and other brain areas. However, the circuit mechanisms underlying these neural correlates remain unknown. Reasoning that a rodent model of evidence accumulation would allow a greater number of experimental subjects, and therefore experiments, as well as facilitate the use of molecular tools, we developed a rat accumulation of evidence task, the "Poisson Clicks" task. In this task, sensory evidence is delivered in pulses whose precisely-controlled timing varies widely within and across trials. The resulting data are analyzed with models of evidence accumulation that use the richly detailed information of each trial’s pulse timing to distinguish between different decision mechanisms. The method provides great statistical power, allowing us to: (1) provide compelling evidence that rats are indeed capable of gradually accumulating evidence for decision-making; (2) accurately estimate multiple parameters of the decision-making process from behavioral data; and (3) measure, for the first time, the diffusion constant of the evidence accumulator, which we show to be optimal (i.e., equal to zero). In addition, the method provides a trial-by-trial, moment-by-moment estimate of the value of the accumulator, which can then be compared in awake behaving electrophysiology experiments to trial-by-trial, moment-by-moment neural firing rate measures. Based on such a comparison, we describe data and a novel analysis approach that reveals differences between parietal and frontal cortices in the neural encoding of accumulating evidence. 
Finally, using semi-automated training methods to produce tens of rats trained in the Poisson Clicks accumulation of evidence task, we have also used pharmacological inactivation to ask, for the first time, whether parietal and frontal cortices are required for accumulation of evidence, and we are using optogenetic methods to rapidly and transiently inactivate brain regions so as to establish precisely when, during each decision-making trial, each brain region's activity is necessary for performance of the task.<br />
<br />
'''28 Jan 2013'''<br />
* Speaker: Eugene M. Izhikevich<br />
* Affiliation: Brain Corporation<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Spikes<br />
* Abstract: Most communication in the brain is via spikes. While we understand the spike-generation mechanism of individual neurons, we fail to appreciate the spike-timing code and its role in neural computations. The speaker starts with simple models of neuronal spiking and bursting, describes small neuronal circuits that learn spike-timing code via spike-timing dependent plasticity (STDP), and finishes with biologically detailed and anatomically accurate large-scale brain models.<br />
<br />
'''29 Jan 2013'''<br />
* Speaker: Goren Gordon<br />
* Affiliation: Weizmann Institute<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: Hierarchical Curiosity Loops – Model, Behavior and Robotics<br />
* Abstract: Autonomously learning about one's own body and its interaction with the environment is a formidable challenge, yet it is ubiquitous in biology: every animal pup and every human infant accomplishes this task in their first few months of life. Furthermore, biological agents’ curiosity actively drives them to explore and experiment in order to expedite their learning progress. To bridge the gap between biological and artificial agents, a formal mathematical theory of curiosity was developed that attempts to explain observed biological behaviors and enable curiosity emergence in robots. In the talk, I will present the hierarchical curiosity loops model, its application to rodent exploratory behavior and its implementation in a fully autonomously learning and behaving reaching robot.<br />
<br />
'''29 Jan 2013'''<br />
* Speaker: Jenny Read<br />
* Affiliation: Institute of Neuroscience, Newcastle University<br />
* Host: Sarah<br />
* Status: confirmed<br />
* Title: Stereoscopic vision<br />
* Abstract: [To be written]<br />
<br />
'''7 Feb 2013'''<br />
* Speaker: Valero Laparra<br />
* Affiliation: University of Valencia<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Empirical statistical analysis of phases in Gabor filtered natural images<br />
* Abstract:<br />
<br />
'''20 Feb 2013'''<br />
* Speaker: Dolores Bozovic<br />
* Affiliation: UCLA<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: Bifurcations and phase-locking dynamics in the auditory system<br />
* Abstract: The inner ear constitutes a remarkable biological sensor that exhibits nanometer-scale sensitivity of mechanical detection. The first step in auditory processing is performed by hair cells, which convert movement into electrical signals via opening of mechanically gated ion channels. These cells operate in a viscous medium, but can nevertheless sustain oscillations, amplify incoming signals, and even exhibit spontaneous motility, indicating the presence of an underlying active amplification system. Theoretical models have proposed that a hair cell constitutes a nonlinear system with an internal feedback mechanism that can drive it across a bifurcation and into an unstable regime. Our experiments explore the nonlinear response as well as feedback mechanisms that enable self-tuning already at the peripheral level, as measured in vitro on sensory tissue. A simple dynamical systems framework that captures the main features of the experimentally observed behavior, in the form of an Arnold tongue, will be discussed.<br />
<br />
'''27 March 2013'''<br />
* Speaker: Dale Purves<br />
* Affiliation: Duke<br />
* Host: Sarah<br />
* Status: confirmed<br />
* Title: How Visual Evolution Determines What We See<br />
* Abstract: Information about the physical world is excluded from visual stimuli by the nature of biological vision (the inverse optics problem). Nonetheless, humans and other visual animals routinely succeed in their environments. The talk will explain how the assignment of perceptual values to visual stimuli according to the frequency of occurrence of stimulus patterns resolves the inverse problem and determines the basic visual qualities we see. This interpretation of vision implies that the best (and perhaps the only) way to understand visual system circuitry is to evolve it, an idea supported by recent work.<br />
<br />
'''9 April 2013'''<br />
* Speaker: Mounya Elhilali<br />
* Affiliation: Johns Hopkins<br />
* Host: Tyler<br />
* Status: confirmed<br />
* Title: Attention at the cocktail party: Neural bases and computational strategies for auditory scene analysis<br />
* Abstract: The perceptual organization of sounds in the environment into coherent objects is a feat constantly facing the auditory system. It manifests itself in the everyday challenge faced by humans and animals alike to parse complex acoustic information arising from multiple sound sources into separate auditory streams. While seemingly effortless, uncovering the neural mechanisms and computational principles underlying this remarkable ability remain a challenge for both the experimental and theoretical neuroscience communities. In this talk, I discuss the potential role of neuronal tuning in mammalian primary auditory cortex in mediating this process. I also examine the role of mechanisms of attention in adapting this neural representation to reflect both the sensory content and the changing behavioral context of complex acoustic scenes.<br />
<br />
'''17 April 2013'''<br />
* Speaker: Wiktor Młynarski<br />
* Affiliation: Max Planck Institute for Mathematics in the Sciences<br />
* Host: Urs<br />
* Status: confirmed<br />
* Title: Statistical Models of Binaural Sounds<br />
* Abstract: The auditory system exploits disparities in the sounds arriving at the left and right ear to extract information about the spatial configuration of sound sources. According to the widely acknowledged Duplex Theory, sounds of low frequency are localized based on Interaural Time Differences (ITDs) and localization of high frequency sources relies on Interaural Level Differences (ILDs). Natural sounds, however, possess a rich structure and contain multiple frequency components. This leads to the question: what are the contributions of different cues to sound position identification in the natural environment and how much information do they carry about its spatial structure? In this talk, I will present my attempts to answer the above questions using statistical, generative models of naturalistic (simulated) and fully natural binaural sounds.<br />
<br />
'''15 May 2013'''<br />
* Speaker: Byron Yu<br />
* Affiliation: CMU<br />
* Host: Bruno/Jose (jointly sponsored with CNEP)<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
'''22 May 2013'''<br />
* Speaker: Bijan Pesaran<br />
* Affiliation: NYU<br />
* Host: Bruno/Jose (jointly sponsored with CNEP)<br />
* Status: confirmed <br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
=== 2011/12 academic year ===<br />
<br />
'''15 Sep 2011 (Thursday, at noon)'''<br />
* Speaker: Kathrin Berkner<br />
* Affiliation: Ricoh Innovations Inc.<br />
* Host: Ivana Tosic<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''21 Sep 2011'''<br />
* Speaker: Mike Kilgard<br />
* Affiliation: UT Dallas<br />
* Host: Michael Silver<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''27 Sep 2011'''<br />
* Speaker: Moshe Gur<br />
* Affiliation: Dept. of Biomedical Engineering, Technion, Israel Institute of Technology<br />
* Host: Bruno/Stan<br />
* Status: Confirmed<br />
* Title: On the unity of perception: How does the brain integrate activity evoked at different cortical loci?<br />
* Abstract: Any physical device we know of, including computers, must send information to a common point C in order to compare A to B. I have done experiments in three modalities, somato-sensory, auditory, and visual, in which two different loci in primary cortex are stimulated, and I argue that the "machine" converging hypothesis cannot explain the perceptual results. Thus we must assume a non-converging mechanism whereby the brain, at times, can compare (integrate, process) events that take place at different loci without sending the information to a common target. Once we allow for such a mechanism, many phenomena can be viewed differently. Take, for example, the question of how and where multi-sensory integration takes place; we perceive a synchronized talking face yet detailed visual and auditory information are represented at very different brain loci.<br />
<br />
'''5 Oct 2011'''<br />
* Speaker: Susanne Still<br />
* Affiliation: University of Hawaii at Manoa<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: Predictive power, memory and dissipation in learning systems operating far from thermodynamic equilibrium<br />
* Abstract: Understanding the physical processes that underlie the functioning of biological computing machinery often requires describing processes that occur far from thermodynamic equilibrium. In recent years significant progress has been made in this area, most notably Jarzynski’s work relation and Crooks’ fluctuation theorem. In this talk I will explore how dissipation of energy is related to a system's information processing inefficiency. The focus is on driven systems that are embedded in a stochastic operating environment. If we describe the system as a state machine, then we can interpret the stochastic dynamics as performing a computation that results in an (implicit) model of the stochastic driving signal. I will show that instantaneous non-predictive information, which serves as a measure of model inefficiency, provides a lower bound on the average dissipated work. This implies that learning systems with larger predictive power can operate more energetically efficiently. We might speculate that biological systems have evolved to reflect this kind of adaptation. One interesting insight is that a purely physical requirement turns out to be perfectly in line with the general belief that a useful model must be predictive (at fixed model complexity). Our result thereby ties together ideas from learning theory with basic non-equilibrium thermodynamics.<br />
<br />
'''19 Oct 2011'''<br />
* Speaker: Graham Cummins<br />
* Affiliation: WSU<br />
* Host: Jeff Teeters<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''26 Oct 2011'''<br />
* Speaker: Shinji Nishimoto<br />
* Affiliation: Gallant lab, UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''14 Dec 2011'''<br />
* Speaker: Austin Roorda<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: How the unstable eye sees a stable and moving world<br />
* Abstract:<br />
<br />
'''11 Jan 2012'''<br />
* Speaker: Ken Nakayama<br />
* Affiliation: Harvard University<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Subjective Contours<br />
* Abstract: The concept of the receptive field in visual science has been transformative. It fueled great discoveries of the second half of the 20th century, providing the dominant understanding of how the visual system works at its early stages. Its reign has been extended to the field of object recognition where, in the form of a linear classifier, it provides a framework to understand visual object recognition (DiCarlo and Cox, 2007).<br />
Untamed, however, are areas of visual perception, now more or less ignored, dubbed variously as the 2.5 D sketch, mid-level vision, surface representations. Here, neurons with their receptive fields seem unable to bridge the gap, to supply us with even a plausible speculative framework to understand amodal completion, subjective contours and other surface phenomena. Correspondingly, these areas have become a backwater: ignored, leapt over.<br />
Subjective contours, however, remain as vivid as ever, even more so.<br />
Every day, our visual system makes countless visual inferences as to the layout of the world's surfaces and objects. What’s remarkable is that subjective contours visibly reveal these inferences.<br />
<br />
'''Tuesday, 24 Jan 2012'''<br />
* Speaker: Aniruddha Das<br />
* Affiliation: Columbia University<br />
* Host: Fritz<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''22 Feb 2012'''<br />
* Speaker: Elad Schneidman <br />
* Affiliation: Department of Neurobiology, Weizmann Institute of Science<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Sparse high order interaction networks underlie learnable neural population codes<br />
* Abstract:<br />
<br />
'''29 Feb 2012 (at noon as usual)'''<br />
* Speaker: Heather Read<br />
* Affiliation: U. Connecticut<br />
* Host: Mike DeWeese<br />
* Status: confirmed<br />
* Title: "Transformation of sparse temporal coding from auditory colliculus and cortex"<br />
* Abstract: TBD<br />
<br />
'''1 Mar 2012 (note: Thurs)'''<br />
* Speaker: Daniel Zoran<br />
* Affiliation: Hebrew University, Jerusalem<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''7 Mar 2012'''<br />
* Speaker: David Sivak<br />
* Affiliation: UCB<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''8 Mar 2012'''<br />
* Speaker: Ivan Schwab<br />
* Affiliation: UC Davis<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Evolution's Witness: How Eyes Evolved<br />
* Abstract:<br />
<br />
'''14 Mar 2012'''<br />
* Speaker: David Sussillo<br />
* Affiliation:<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''18 April 2012'''<br />
* Speaker: Kristofer Bouchard<br />
* Affiliation: UCSF<br />
* Host: Bruno<br />
* Status: confirmed<br />
* Title: Cortical Foundations of Human Speech Production<br />
* Abstract:<br />
<br />
'''23 May 2012''' (rescheduled from April 11)<br />
* Speaker: Logan Grosenick<br />
* Affiliation: Stanford, Deisseroth & Suppes Labs<br />
* Host: Jascha<br />
* Status: confirmed<br />
* Title: Acquisition, creation, & analysis of 4D light fields with applications to calcium imaging & optogenetics<br />
* Abstract: In Light Field Microscopy (LFM), images can be computationally refocused after they are captured [1]. This permits acquiring focal stacks and reconstructing volumes from a single camera frame. In Light Field Illumination (LFI), the same ideas can be used to create an illumination system that can deliver focused light to any position in a volume without moving optics, and these two devices (LFM/LFI) can be used together in the same system [2]. So far, these imaging and illumination systems have largely been used independently in proof-of-concept experiments [1,2]. In this talk I will discuss applications of a combined scanless volumetric imaging and volumetric illumination system applied to 4D calcium imaging and photostimulation of neurons in vivo and in vitro. The volumes resulting from these methods are large (>500,000 voxels per time point), collected at 10-100 frames per second, and highly correlated in space and time. Analyzing such data has required the development and application of machine learning methods appropriate to large, sparse, nonnegative data, as well as the estimation of neural graphical models from calcium transients. This talk will cover the reconstruction and creation of volumes in a microscope using Light Fields [1,2], and the current state-of-the-art for analyzing these large volumes in the context of calcium imaging and optogenetics. <br />
<br />
[1] M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz. Light Field Microscopy. ACM Transactions on Graphics 25(3), Proceedings of SIGGRAPH 2006.<br />
[2] M. Levoy, Z. Zhang, and I. McDowall. Recording and controlling the 4D light field in a microscope. Journal of Microscopy, Volume 235, Part 2, 2009, pp. 144-162. Cover article.<br />
<br />
BIO: Logan received bachelor's degrees with honors in Biology and Psychology, and a master's in Statistics, from Stanford. He is a Ph.D. candidate in the Neurosciences Program working in the labs of Karl Deisseroth and Patrick Suppes, and a trainee at the Stanford Center for Mind, Brain, and Computation. He is interested in developing and applying novel computational imaging and machine learning techniques in order to observe, control, and understand neuronal circuit dynamics.<br />
<br />
'''7 June 2012''' (Thursday)<br />
* Speaker: Mitya Chklovskii<br />
* Affiliation: Janelia<br />
* Host: Bruno<br />
* Status:<br />
* Title:<br />
* Abstract:<br />
<br />
'''27 June 2012''' <br />
* Speaker: Jerry Feldman<br />
* Affiliation:<br />
* Host: Bruno<br />
* Status:<br />
* Title:<br />
* Abstract:<br />
<br />
'''30 July 2012''' <br />
* Speaker: Lucas Theis<br />
* Affiliation: Matthias Bethge lab, Werner Reichardt Centre for Integrative Neuroscience, Tübingen<br />
* Host: Jascha<br />
* Status: Confirmed<br />
* Title: Hierarchical models of natural images<br />
* Abstract: Probabilistic models of natural images have been used to solve a variety of computer vision tasks and as a means to better understand the computations performed by the visual system in the brain. Many theoretical considerations and biological observations suggest that natural image models should be hierarchically organized, yet to date the best known models are still based on what is better described as shallow representations. In this talk, I will present two image models. One is based on the idea of Gaussianization for greedily constructing hierarchical generative models. I will show that when combined with independent subspace analysis, it is able to compete with the state of the art for modeling image patches. The other model combines mixtures of Gaussian scale mixtures with a directed graphical model and multiscale image representations, and is able to generate highly structured images of arbitrary size. Evaluating the model's likelihood and comparing it to a large number of other image models shows that it might well be the best model for natural images yet.<br />
<br />
(joint work with Reshad Hosseini and Matthias Bethge)<br />
<br />
=== 2010/11 academic year ===<br />
<br />
'''02 Sep 2010'''<br />
* Speaker: Johannes Burge<br />
* Affiliation: University of Texas at Austin<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''8 Sep 2010'''<br />
* Speaker: Tobi Szuts<br />
* Affiliation: Meister Lab/ Harvard U.<br />
* Host: Mike DeWeese<br />
* Status: Confirmed<br />
* Title: Wireless recording of neural activity in the visual cortex of a freely moving rat.<br />
* Abstract: Conventional neural recording systems restrict behavioral experiments to a flat indoor environment compatible with the cable that tethers the subject to the recording instruments. To overcome these constraints, we developed a wireless multi-channel system for recording neural signals from a freely moving animal the size of a rat or larger. The device takes up to 64 voltage signals from implanted electrodes, samples each at 20 kHz, time-division multiplexes them onto a single output line, and transmits that output by radio frequency to a receiver and recording computer over 60 m away. The system introduces less than 4 µV RMS of electrode-referred noise, comparable to wired recording systems and considerably less than biological noise. The system offers a greater channel count or transmission distance than existing telemetry systems. The wireless system has been used to record from the visual cortex of a rat under unconstrained conditions. Outdoor recordings show that V1 activity is modulated by nest-building activity. During unguided behavior indoors, neurons responded rapidly and consistently to changes in light level, suppressive effects were prominent in response to an illuminant transition, and firing rate was strongly modulated by locomotion. Neural firing in the visual cortex is relatively sparse, and moderate correlations are observed over large distances, suggesting that synchrony is driven by global processes.<br />
<br />
'''29 Sep 2010'''<br />
* Speaker: Vikash Gilja<br />
* Affiliation: Stanford University<br />
* Host: Charles<br />
* Status: Confirmed<br />
* Title: Towards Clinically Viable Neural Prosthetic Systems.<br />
* Abstract:<br />
<br />
'''20 Oct 2010'''<br />
* Speaker: Alexandre Francois<br />
* Affiliation: USC<br />
* Host: <br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''3 Nov 2010'''<br />
* Speaker: Eric Jonas and Vikash Mansinghka<br />
* Affiliation: Navia Systems<br />
* Host: Jascha<br />
* Status: Confirmed<br />
* Title: Natively Probabilistic Computation: Principles, Artifacts, Architectures and Applications<br />
* Abstract: Complex probabilistic models and Bayesian inference are becoming increasingly critical across science and industry, especially in large-scale data analysis. They are also central to our best computational accounts of human cognition, perception and action. However, all these efforts struggle with the infamous curse of dimensionality. Rich probabilistic models can seem hard to write and even harder to solve, as specifying and calculating probabilities often appears to require the manipulation of exponentially (and sometimes infinitely) large tables of numbers.<br />
<br />
We argue that these difficulties reflect a basic mismatch between the needs of probabilistic reasoning and the deterministic, functional orientation of our current hardware, programming languages and CS theory. To mitigate these issues, we have been developing a stack of abstractions for natively probabilistic computation, based around stochastic simulators (or samplers) for distributions, rather than evaluators for deterministic functions. Ultimately, our aim is to produce a model of computation and the associated hardware and programming tools that are as suited for uncertain inference and decision-making as our current computers are for precise arithmetic.<br />
<br />
In this talk, we will give an overview of the entire stack of abstractions supporting natively probabilistic computation, with technical detail on several hardware and software artifacts we have implemented so far. We will also touch on some new theoretical results regarding the computational complexity of probabilistic programs. Throughout, we will motivate and connect this work to some current applications in biomedical data analysis and computer vision, as well as potential hypotheses regarding the implementation of probabilistic computation in the brain.<br />
<br />
This talk includes joint work with Keith Bonawitz, Beau Cronin, Cameron Freer, Daniel Roy and Joshua Tenenbaum.<br />
<br />
BRIEF BIOGRAPHY<br />
<br />
Vikash Mansinghka is a co-founder and the CTO of Navia Systems, a venture-funded startup company building natively probabilistic computing machines. He spent 10 years at MIT, eventually earning an S.B. in Mathematics, an S.B. in Computer Science, an M.Eng. in Computer Science, and a PhD in Computation. He held graduate fellowships from the NSF and MIT's Lincoln Laboratory, and his PhD dissertation won the 2009 MIT George M. Sprowls award for best dissertation in computer science. He currently serves on DARPA's Information Science and Technology (ISAT) Study Group.<br />
<br />
Eric Jonas is a co-founder of Navia Systems, responsible for in-house accelerated inference research and development. He spent ten years at MIT, where he earned S.B. degrees in electrical engineering and computer science and in neurobiology, and an M.Eng. in EECS, with a neurobiology PhD expected soon. He is passionate about biological applications of probabilistic reasoning and hopes to use Navia's capabilities to combine data from biological science, clinical histories, and patient outcomes into seamless models.<br />
<br />
'''8 Nov 2010'''<br />
* Speaker: Patrick Ruther<br />
* Affiliation: Imtek, University of Freiburg<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract: TBD<br />
<br />
'''10 Nov 2010'''<br />
* Speaker: Aurel Lazar<br />
* Affiliation: Department of Electrical Engineering, Columbia University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Encoding Visual Stimuli with a Population of Hodgkin-Huxley Neurons<br />
* Abstract: We first present a general framework for the reconstruction of natural video scenes encoded with a population of spiking neural circuits with random thresholds. The visual encoding system consists of a bank of filters, modeling the visual receptive fields, in cascade with a population of neural circuits, modeling encoding with spikes in the early visual system. The neuron models considered include integrate-and-fire neurons and ON-OFF neuron pairs with threshold-and-fire spiking mechanisms. All thresholds are assumed to be random. We show that for both time-varying and space-time-varying stimuli, neural spike encoding is akin to taking noisy measurements on the stimulus.<br />
<br />
Second, we formulate the reconstruction problem as the minimization of a suitable cost functional in a finite-dimensional vector space and provide an explicit algorithm for stimulus recovery. We also present a general solution using the theory of smoothing splines in Reproducing Kernel Hilbert Spaces. We provide examples for both synthetic video and natural scenes, and show that the quality of the reconstruction degrades gracefully as the threshold variability of the neurons increases.<br />
<br />
Third, we demonstrate a number of simple operations on the original visual stimulus, including translations, rotations and zooming. All these operations are natively executed in the spike domain. The processed spike trains are decoded for the faithful recovery of the stimulus and its transformations. Finally, we extend the above results to neural encoding circuits built with Hodgkin-Huxley neurons.<br />
<br />
References:<br />
Aurel A. Lazar, Eftychios A. Pnevmatikakis and Yiyin Zhou, "Encoding Natural Scenes with Neural Circuits with Random Thresholds," Vision Research, Special Issue on Mathematical Models of Visual Coding, 2010, http://dx.doi.org/10.1016/j.visres.2010.03.015<br />
Aurel A. Lazar, "Population Encoding with Hodgkin-Huxley Neurons," IEEE Transactions on Information Theory, Vol. 56, No. 2, pp. 821-837, February 2010, Special Issue on Molecular Biology and Neuroscience, http://dx.doi.org/10.1109/TIT.2009.2037040<br />
<br />
'''11 Nov 2010''' (UCB holiday)<br />
* Speaker: Martha Nari Havenith<br />
* Affiliation: UCL<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: Finding spike timing in the visual cortex - Oscillations as the internal clock of vision?<br />
* Abstract:<br />
<br />
'''19 Nov 2010''' (note: on Friday because of SFN)<br />
* Speaker: Dan Butts<br />
* Affiliation: UMD<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: Common roles of inhibition in visual and auditory processing.<br />
* Abstract: The role of inhibition in sensory processing is often obscured in extracellular recordings, because the absence of a neuronal response associated with inhibition might also be explained by a simple lack of excitation. However, increasingly, evidence from intracellular recordings demonstrates important roles of inhibition in shaping the stimulus selectivity of sensory neurons in both the visual and auditory systems. We have developed a nonlinear modeling approach that can identify putative excitatory and inhibitory inputs to a neuron using standard extracellular recordings, and have applied these techniques to understand the role of inhibition in shaping sensory processing in visual and auditory areas. In pre-cortical visual areas (retina and LGN), we find that inhibition likely plays a role in generating temporally precise responses, and mediates adaptation to changing contrast. In an auditory pre-cortical area (the inferior colliculus), the identified inhibition has a nearly identical appearance and function in temporal processing and adaptation. Thus, we predict common roles of inhibition in these sensory areas, and, more generally, demonstrate methods for characterizing the nonlinear computations that comprise sensory processing.<br />
<br />
'''24 Nov 2010'''<br />
* Speaker: Eizaburo Doi<br />
* Affiliation: NYU<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
<br />
'''29 Nov 2010 - informal talk'''<br />
* Speaker: Eero Lehtonen<br />
* Affiliation: UTU Finland<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Memristors<br />
* Abstract:<br />
<br />
'''1 Dec 2010'''<br />
* Speaker: Gadi Geiger<br />
* Affiliation: MIT<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: Visual and Auditory Perceptual Modes that Characterize Dyslexics<br />
* Abstract: I will describe how dyslexics’ visual and auditory perception is wider and more diffuse than that of typical readers, which suggests wider neural tuning in dyslexics. In addition, I will describe how this processing relates to difficulties in reading. To strengthen the argument, and more importantly to help dyslexics, I will describe a regimen of practice that narrows perception and results in improved reading in dyslexics.<br />
<br />
<br />
'''13 Dec 2010'''<br />
* Speaker: Jorg Lueke<br />
* Affiliation: FIAS<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Linear and Non-linear Approaches to Component Extraction and Their Applications to Visual Data<br />
* Abstract: In the nervous system of humans and animals, sensory data are represented as combinations of elementary data components. While for data such as sound waveforms the elementary components combine linearly, other data can better be modeled by non-linear forms of component superpositions. I motivate and discuss two models with binary latent variables: one using standard linear superpositions of basis functions and one using non-linear superpositions. Crucial for the applicability of both models are efficient learning procedures. I briefly introduce a novel training scheme (ET) and show how it can be applied to probabilistic generative models. For linear and non-linear models the scheme efficiently infers the basis functions as well as the level of sparseness and data noise. In large-scale applications to image patches, we show results on the statistics of inferred model parameters. Differences between the linear and non-linear models are discussed, and both models are compared to results of standard approaches in the literature and to experimental findings. Finally, I briefly discuss learning in a recent model that takes explicit component occlusions into account.<br />
<br />
'''15 Dec 2010'''<br />
* Speaker: Claudia Clopath<br />
* Affiliation: Université Paris Descartes<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
<br />
'''18 Jan 2011'''<br />
* Speaker: Siwei Lyu<br />
* Affiliation: Computer Science Department, University at Albany, SUNY<br />
* Host: Bruno<br />
* Status: confirmed <br />
* Title: Divisive Normalization as an Efficient Coding Transform: Justification and Evaluation<br />
* Abstract:<br />
<br />
'''19 Jan 2011'''<br />
* Speaker: David Field (informal talk)<br />
* Affiliation: <br />
* Host: Bruno<br />
* Status: Tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''25 Jan 2011'''<br />
* Speaker: Ruth Rosenholtz<br />
* Affiliation: Dept. of Brain & Cognitive Sciences, Computer Science and AI Lab, MIT<br />
* Host: Bruno<br />
* Status: Confirmed <br />
* Title: What your visual system sees where you are not looking<br />
* Abstract:<br />
<br />
'''26 Jan 2011'''<br />
* Speaker: Ernst Niebur<br />
* Affiliation: Johns Hopkins U<br />
* Host: Fritz<br />
* Status: Confirmed <br />
* Title: <br />
* Abstract:<br />
<br />
'''16 March 2011'''<br />
* Speaker: Vladimir Itskov<br />
* Affiliation: University of Nebraska-Lincoln<br />
* Host: Chris<br />
* Status: Confirmed <br />
* Title: <br />
* Abstract:<br />
<br />
'''23 March 2011'''<br />
* Speaker: Bruce Cumming<br />
* Affiliation: National Institutes of Health<br />
* Host: Ivana<br />
* Status: Confirmed<br />
* Title: TBD<br />
* Abstract:<br />
<br />
'''27 April 2011'''<br />
* Speaker: Lubomir Bourdev<br />
* Affiliation: Computer Science, UC Berkeley<br />
* Host:Bruno<br />
* Status: Confirmed<br />
* Title: "Poselets and Their Applications in High-Level Computer Vision Problems"<br />
* Abstract:<br />
<br />
'''12 May 2011 (note: Thursday)'''<br />
* Speaker: Jack Culpepper<br />
* Affiliation: Redwood Center/EECS<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''26 May 2011'''<br />
* Speaker: Ian Stevenson<br />
* Affiliation: Northwestern University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Explaining tuning curves by estimating interactions between neurons<br />
* Abstract: One of the central tenets of systems neuroscience is that tuning curves are a byproduct of the interactions between neurons. Using multi-electrode recordings and recently developed inference techniques we can begin to examine this idea in detail and study how well we can explain the functional properties of neurons using the activity of other simultaneously recorded neurons. Here we examine datasets from 6 different brain areas recorded during typical sensorimotor tasks each with ~100 simultaneously recorded neurons. Using these datasets we measured the extent to which interactions between neurons can explain the tuning properties of individual neurons. We found that, in almost all areas, modeling interactions between 30-50 neurons allows more accurate spike prediction than tuning curves. This suggests that tuning can, in some sense, be explained by interactions between neurons in a variety of brain areas, even when recordings consist of relatively small numbers of neurons.<br />
<br />
'''1 June 2011'''<br />
* Speaker: Michael Oliver<br />
* Affiliation: Gallant lab<br />
* Host: Bruno<br />
* Status: Tentative <br />
* Title: <br />
* Abstract:<br />
<br />
'''8 June 2011'''<br />
* Speaker: Alyson Fletcher<br />
* Affiliation: UC Berkeley<br />
* Host: Bruno<br />
* Status: tentative<br />
* Title: Generalized Approximate Message Passing for Neural Receptive Field Estimation and Connectivity<br />
* Abstract: Fundamental to understanding sensory encoding and connectivity of neurons are effective tools for developing and validating complex mathematical models from experimental data. In this talk, I present a graphical models approach to the problems of neural connectivity reconstruction under multi-neuron excitation and of receptive field estimation of sensory neurons in response to stimuli. I describe a new class of Generalized Approximate Message Passing (GAMP) algorithms for a general class of inference problems on graphical models, based on Gaussian approximations of loopy belief propagation. The GAMP framework is extremely general, provides a systematic procedure for incorporating a rich class of nonlinearities, and is computationally tractable with large amounts of data. In addition, for both the connectivity reconstruction and parameter estimation problems, I show that GAMP-based estimation can naturally incorporate sparsity constraints in the model that arise from the fact that only a small fraction of the potential inputs have any influence on the output of a particular neuron. A simulation of reconstruction of cortical neural mapping under multi-neuron excitation shows that GAMP offers improvement over previous compressed sensing methods. The GAMP method is also validated on estimation of linear-nonlinear-Poisson (LNP) cascade models for neural responses of salamander retinal ganglion cells.<br />
<br />
=== 2009/10 academic year ===<br />
<br />
'''2 September 2009''' <br />
* Speaker: Keith Godfrey<br />
* Affiliation: University of Cambridge<br />
* Host: Tim<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract:<br />
<br />
'''7 October 2009'''<br />
* Speaker: Anita Schmid<br />
* Affiliation: Cornell University<br />
* Host: Kilian<br />
* Status: Confirmed<br />
* Title: Subpopulations of neurons in visual area V2 perform differentiation and integration operations in space and time<br />
* Abstract: The interconnected areas of the visual system work together to find object boundaries in visual scenes. Primary visual cortex (V1) mainly extracts oriented luminance boundaries, while secondary visual cortex (V2) also detects boundaries defined by differences in texture. How the outputs of V1 neurons are combined to allow for the extraction of these more complex boundaries in V2 is as yet unclear. To address this question, we probed the processing of orientation signals in single neurons in V1 and V2, focusing on response dynamics of neurons to patches of oriented gratings and to combinations of gratings in neighboring patches and sequential time frames. We found two kinds of response dynamics in V2, both of which are different from those of V1 neurons. While V1 neurons in general prefer one orientation, one subpopulation of V2 neurons (“transient”) shows a temporally dynamic preference, resulting in a preference for changes in orientation. The second subpopulation of V2 neurons (“sustained”) responds similarly to V1 neurons, but with a delay. The dynamics of nonlinear responses to combinations of gratings reinforce these distinctions: the dynamics enhance the preference of V1 neurons for continuous orientations, and enhance the preference of V2 transient neurons for discontinuous ones. We propose that transient neurons in V2 perform a differentiation operation on the V1 input, both spatially and temporally, while the sustained neurons perform an integration operation. We show that a simple feedforward network with delayed inhibition can account for the temporal but not for the spatial differentiation operation.<br />
<br />
'''28 October 2009'''<br />
* Speaker: Andrea Benucci<br />
* Affiliation: Institute of Ophthalmology, University College London<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Stimulus dependence of the functional connectivity between neurons in primary visual cortex<br />
* Abstract: It is known that visual stimuli are encoded by the concerted activity of large populations of neurons in visual cortical areas. However, it is only recently that recording techniques have been made available to study such activations from large ensembles of neurons simultaneously, with millisecond temporal precision and tens of microns spatial resolution. I will present data from voltage-sensitive dye (VSD) imaging and multi-electrode recordings (“Utah” probes) from the primary visual cortex of the cat (V1). I will discuss the relationship between two fundamental cortical maps of the visual system: the map of retinotopy and the map of orientation. Using spatially localized and full-field oriented stimuli, we studied the functional interdependency of these maps. I will describe traveling and standing waves of cortical activity and their key role as a dynamical substrate for the spatio-temporal coding of visual information. I will further discuss the properties of the spatio-temporal code in the context of continuous visual stimulation. While recording population responses to a sequence of oriented stimuli, we asked how responses to individual stimuli summate over time. We found that such rules are mostly linear, supporting the idea that spatial and temporal codes in area V1 operate largely independently. However, these linear rules of summation fail when the visual drive is removed, suggesting that the visual cortex can readily switch between a dynamical regime where either feed-forward or intra-cortical inputs determine the response properties of the network.<br />
<br />
'''12 November 2009 (Thursday)'''<br />
* Speaker: Song-Chun Zhu<br />
* Affiliation: UCLA<br />
* Host: Jimmy<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''18 November 2009'''<br />
* Speaker: Dan Graham<br />
* Affiliation: Dept. of Mathematics, Dartmouth College<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: The Packet-Switching Brain: A Hypothesis<br />
* Abstract: Despite great advances in our understanding of neural responses to natural stimuli, the basic structure of the neural code remains elusive. In this talk, I will describe a novel hypothesis regarding the fundamental structure of neural coding in mammals. In particular, I propose that an internet-like routing architecture (specifically packet-switching) underlies neocortical processing, and I propose means of testing this hypothesis via neural response sparseness measurements. I will synthesize a host of suggestive evidence that supports this notion and will, more generally, argue in favor of a large scale shift from the now dominant “computer metaphor,” to the “internet metaphor.” This shift is intended to spur new thinking with regard to neural coding, and its main contribution is to privilege communication over computation as the prime goal of neural systems.<br />
<br />
'''16 December 2009'''<br />
* Speaker: Pietro Berkes<br />
* Affiliation: Volen Center for Complex Systems, Brandeis University<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Generative models of vision: from sparse coding toward structured models<br />
* Abstract: From a computational perspective, one can think of visual perception as the problem of analyzing the light patterns detected by the retina to recover their external causes. This process requires combining the incoming sensory evidence with internal prior knowledge about general properties of visual elements and the way they interact, and can be formalized in a class of models known as causal generative models. In the first part of the talk, I will discuss the first and most established generative model, namely the sparse coding model. Sparse coding has been largely successful in showing how the main characteristics of simple cells' receptive fields can be accounted for based uniquely on the statistics of natural images. I will briefly review the evidence supporting this model, and contrast it with recent data from the primary visual cortex of ferrets and rats showing that the sparseness of neural activity over development and anesthesia seems to follow trends opposite to those predicted by sparse coding. In the second part, I will argue that the generative point of view calls for models of natural images that take into account more of the structure of the visual environment. I will present a model that takes a first step in this direction by incorporating the fundamental distinction between identity and attributes of visual elements. After learning, the model mirrors several aspects of the organization of V1, and results in a novel interpretation of complex and simple cells as parallel populations of cells, coding for different aspects of the visual input. Further steps toward more structured generative models might thus lead to the development of a more comprehensive account of visual processing in the visual cortex.<br />
<br />
'''6 January 2010'''<br />
* Speaker: Susanne Still<br />
* Affiliation: U of Hawaii<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''20 January 2010'''<br />
* Speaker: Tom Dean<br />
* Affiliation: Google<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: Accelerating Computer Vision and Machine Learning Algorithms with Graphics Processors<br />
* Abstract: Graphics processors (GPUs) and massively-multi-core architectures are becoming more powerful, less costly and more energy efficient, and the related programming language issues are beginning to sort themselves out. That said, most researchers don’t want to be writing code that depends on any particular architecture or parallel programming model. Linear algebra, Fourier analysis and image processing have standard libraries that are being ported to exploit SIMD parallelism in GPUs. We can depend on the massively-multi-core machines du jour to support these libraries and on the high-performance-computing (HPC) community to do the porting for us or with us. These libraries can significantly accelerate important applications in image processing, data analysis and information retrieval. We can develop APIs and the necessary run-time support so that code relying on these libraries will run on any machine in a cluster of computers but exploit GPUs whenever available. This strategy allows us to move toward hybrid computing models that enable a wider range of opportunities for parallelism without requiring the special training of programmers or the disadvantages of developing code that depends on specialized hardware or programming models. This talk summarizes the state of the art in massively-multi-core architectures, presents experimental results that demonstrate the potential for significant performance gains in the two general areas of image processing and machine learning, provides examples of the proposed programming interface, and presents some more detailed experimental results on one particular problem involving video-content analysis.<br />
<br />
'''27 January 2010'''<br />
* Speaker: David Philiponna<br />
* Affiliation: Paris<br />
* Host: Bruno<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''24 February 2010'''<br />
* Speaker: Gordon Pipa<br />
* Affiliation: U Osnabrueck/MPI Frankfurt<br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''3 March 2010'''<br />
* Speaker: Gaute Einevoll<br />
* Affiliation: UMB, Norway<br />
* Host: Amir<br />
* Status: Confirmed<br />
* Title: TBA<br />
* Abstract: TBA<br />
<br />
<br />
'''4 March 2010'''<br />
* Speaker: Harvey Swadlow<br />
* Affiliation: <br />
* Host: Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''8 April 2010'''<br />
* Speaker: Alan Yuille <br />
* Affiliation: UCLA<br />
* Host: Amir<br />
* Status: Confirmed (for 1pm)<br />
* Title: <br />
* Abstract:<br />
<br />
'''28 April 2010'''<br />
* Speaker: Dharmendra Modha - cancelled<br />
* Affiliation: IBM<br />
* Host:Fritz<br />
* Status: Confirmed<br />
* Title: <br />
* Abstract:<br />
<br />
'''5 May 2010'''<br />
* Speaker: David Zipser<br />
* Affiliation: UCB<br />
* Host: Daniel Little<br />
* Status: Tentative<br />
* Title: Brytes 2:<br />
* Abstract:<br />
<br />
Brytes are little brains that can be assembled into larger, smarter brains. In my first talk I presented a biologically plausible, computationally tractable model of brytes and described how they can be used as subunits to build brains with interesting behaviors.<br />
<br />
In this talk I will first show how large numbers of brytes can cooperate to perform complicated actions such as arm and hand manipulations in the presence of obstacles. Then I describe a strategy for a higher level of control that informs each bryte what role it should play in accomplishing the current task. These results could have considerable significance for understanding the brain and possibly be applicable to robotics and BMI.<br />
<br />
'''12 May 2010'''<br />
* Speaker: Frank Werblin (Redwood group meeting - internal only)<br />
* Affiliation: Berkeley<br />
* Host: Bruno<br />
* Status: Tentative<br />
* Title: <br />
* Abstract:<br />
<br />
'''19 May 2010'''<br />
* Speaker: Anna Judith<br />
* Affiliation: UCB<br />
* Host: Daniel Little (Redwood Lab Meeting - internal only)<br />
* Status: confirmed<br />
* Title: <br />
* Abstract:</div>Giselyhttps://rctn.org/w/index.php?title=Mentorship_Resources&diff=7550Mentorship Resources2014-06-23T20:53:36Z<p>Gisely: </p>
<hr />
<div>Mentorship is one of the most concretely impactful ways to give back for all the opportunities that have been provided to us over the course of our educations. Additionally, mentorship may be one mechanism we can use to address some of the diversity issues we face at the Redwood Center. In particular, active recruitment of undergraduate mentees may be one way to reach students who would otherwise not realize that working in a computationally oriented group like ours is approachable for them.<br />
<br />
Some things we should work on:<br />
# Organizing a list of mentorship projects that are available at the Redwood.<br />
# Establishing a procedure for students to apply to mentorship opportunities with us.<br />
# Finding effective channels for active recruitment of talented mentees; for example, we might consider attending meetings of undergraduate organizations whose interests overlap with the Redwood's to let them know about opportunities available here.<br />
# Providing training for graduate students on how to be a successful mentor.<br />
<br />
What follows is the beginning of a list of resources for mentorship and outreach at Berkeley. Please add to it!<br />
<br />
== Undergraduate Mentorship ==<br />
* The [http://research.berkeley.edu/urap Undergraduate Research Apprenticeship Program (URAP)] provides a central list of research projects that are available for undergraduates in different departments across campus. For undergraduates who stick with a project during the fall and spring semesters, there are $2500 stipends available to continue research over the summer. Undergraduates apply for the Fall 2014 program starting on August 21; we should consider getting some research opportunity postings up there.<br />
* The [http://surf.berkeley.edu/ Summer Undergraduate Research Fellowship (SURF)] provides funding for a small number of exceptional undergraduates to do research over the summer. Students apply at the end of February/beginning of March. If you have a talented undergrad working with you, make sure they know about this!<br />
* The NSF [http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=5517&org=NSF Research Experience for Undergraduates (REU)] program provides some funding for undergraduates to work on NSF-funded research projects. This may be one mechanism to get funding for an undergraduate research assistant.<br />
* The [http://aavp.berkeley.edu/services/ugmp.html Undergraduate-Graduate Mentorship Program (UGMP)] pairs undergraduates with graduate students to get mentorship advice, for example, on preparing to apply for graduate school. Applications to participate in the spring program are due at the beginning of December.<br />
<br />
== Mentorship of younger students ==<br />
* The Summer Math and Science Honors (SMASH) [http://cdms.berkeley.edu/UCBlabs/Main/SMASH Topics in Current Science Research] course is an opportunity to lead a simple research project with a small group of underprivileged high school students.</div>Gisely