VS265: Reading

==== Aug 28: Introduction ====
Optional:

==== Sept 2: Neuron models ====
Background reading on dynamics, linear time-invariant systems and convolution, and differential equations:

==== Sept 4: Linear neuron, Perceptron ====
* '''HKP''' chapter 5, '''DJCM''' chapters 38-40, 44, '''DA''' chapter 8 (sec. 4-6)
* [http://redwood.berkeley.edu/vs265/linear-neuron/linear-neuron-models.html Linear neuron models] (see the short code sketch after this list)
Background on linear algebra:
* [http://redwood.berkeley.edu/vs265/linear-algebra/linear-algebra.html Linear algebra primer]
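A minimal sketch of the linear neuron and perceptron material listed above, in plain NumPy. The toy dataset, learning rate, and variable names are illustrative assumptions, not taken from the readings:
<pre>
import numpy as np

# Toy 2-D dataset: two Gaussian clusters with labels +1 / -1 (illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+1.0, 0.5, (50, 2)), rng.normal(-1.0, 0.5, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

# Linear neuron: output is a weighted sum of the inputs, u = w.x + b.
# Perceptron learning rule: update weights only on misclassified examples.
w = np.zeros(2)
b = 0.0
eta = 0.1                      # learning rate (arbitrary choice)
for epoch in range(20):
    for x, t in zip(X, y):
        u = w @ x + b          # linear summation of inputs
        if np.sign(u) != t:    # misclassified -> nudge the decision boundary
            w += eta * t * x
            b += eta * t

print("weights:", w, "bias:", b)
print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
</pre>
The dot products and the weight vector defining a separating hyperplane are exactly the linear-algebra machinery the primer above reviews.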

==== Sept 11: Multicompartment models, dendritic integration ====
* Rhodes P (1999) [http://redwood.berkeley.edu/vs265/Rhodes-review.pdf Functional Implications of Active Currents in the Dendrites of Pyramidal Neurons]
* Schiller J (2003) [http://redwood.berkeley.edu/vs265/Schiller-spikes-dendrites.pdf Submillisecond Precision of the Input–Output Transformation Function Mediated by Fast Sodium Dendritic Spikes in Basal Dendrites of CA1 Pyramidal Neurons]
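As a loose companion to the papers above, here is a minimal two-compartment passive model (dendrite coupled to soma) integrated with forward Euler. The parameter values, units, and injected current are arbitrary illustrative assumptions; the active dendritic currents and sodium spikes discussed in the papers are deliberately omitted:
<pre>
import numpy as np

# Two passive compartments (dendrite, soma) coupled by a conductance g_c.
# Forward-Euler integration of: C dV/dt = -g_L (V - E_L) + g_c (V_other - V) + I_inj
# All parameter values below are arbitrary illustrative choices (nominal nF, uS, mV, nA, ms).
C, g_L, E_L, g_c = 1.0, 0.1, -65.0, 0.05
dt, T = 0.1, 200.0
steps = int(T / dt)

V = np.full(2, E_L)                          # V[0] = dendrite, V[1] = soma
trace = np.zeros((steps, 2))
for k in range(steps):
    # Step current injected into the dendritic compartment between 50 and 150 ms.
    I_inj = np.array([0.5 if 50.0 <= k * dt < 150.0 else 0.0, 0.0])
    dV = (-g_L * (V - E_L) + g_c * (V[::-1] - V) + I_inj) / C
    V = V + dt * dV
    trace[k] = V

print("peak dendritic / somatic depolarization (mV):", trace.max(axis=0) - E_L)
</pre>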
==== Sept. 16 ====
* [http://redwood.berkeley.edu/vs265/superlearn_handout1.pdf Handout] on supervised learning in single-stage feedforward networks
* [http://redwood.berkeley.edu/vs265/superlearn_handout2.pdf Handout] on supervised learning in multi-layer feedforward networks ("back propagation"; a short code sketch follows at the end of this section)
Further reading:
* Y. LeCun, L. Bottou, G. Orr, and K. Muller (1998) [http://redwood.berkeley.edu/vs265/lecun-98b.pdf "Efficient BackProp"], in Neural Networks: Tricks of the Trade (G. Orr and K. Muller, eds.)
* [http://cnl.salk.edu/Research/ParallelNetsPronounce/ NetTalk demo]
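The two handouts above cover supervised learning in single-stage networks and backpropagation in multi-layer networks. Below is a compact NumPy sketch of backpropagation in a one-hidden-layer network; the XOR task, architecture, learning rate, and variable names are assumptions chosen for illustration and do not follow the handouts' notation:
<pre>
import numpy as np

# Toy problem: XOR, which a single-stage network cannot solve,
# but a two-layer network trained with backpropagation can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1 = rng.normal(0, 1, (2, 4))        # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))        # hidden -> output weights
b2 = np.zeros(1)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))
eta = 1.0                            # learning rate (arbitrary)

for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)                  # hidden activations
    y = sigmoid(h @ W2 + b2)                  # network output
    # Backward pass: propagate error derivatives layer by layer
    delta2 = (y - t) * y * (1 - y)            # output-layer error term
    delta1 = (delta2 @ W2.T) * h * (1 - h)    # hidden-layer error term
    # Gradient-descent weight updates
    W2 -= eta * h.T @ delta2
    b2 -= eta * delta2.sum(axis=0)
    W1 -= eta * X.T @ delta1
    b1 -= eta * delta1.sum(axis=0)

# Outputs should approach [0, 1, 1, 0] after training.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
</pre>
The practical tweaks discussed in "Efficient BackProp" (input normalization, learning-rate choices, weight initialization) are what make sketches like this train reliably at scale.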