<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="">
<meta name="author" content="">
<title>Pavan Ramkumar Research Portfolio</title>
<!-- Bootstrap Core CSS -->
<!--<link href="css/bootstrap.min.css" rel="stylesheet"> -->
<link href="css/bootstrap.css" rel="stylesheet">
<!-- Custom CSS -->
<link href="css/grayscale-agency.css" rel="stylesheet">
<!-- Favicon -->
<link href="figs/favicon-bar-chart-o.ico" rel="shortcut icon">
<!-- Custom Fonts -->
<link href="font-awesome/css/font-awesome.min.css" rel="stylesheet" type="text/css">
<link href="http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic" rel="stylesheet" type="text/css">
<link href="http://fonts.googleapis.com/css?family=Montserrat:400,700" rel="stylesheet" type="text/css">
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
<![endif]-->
<!-- D3JS scripts -->
<script type="text/javascript" src="./d3/d3.v3.js"></script>
<script src="//code.jquery.com/jquery-1.12.0.min.js"></script>
<link href="css/zoom.css" rel="stylesheet">
<script type="text/javascript" src="js/zoom.js"></script>
<script type="text/javascript" src="js/transition.js"></script>
<!-- Google Analytics Tracking -->
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-73654333-1', 'auto');
ga('send', 'pageview');
</script>
</head>
<body id="page-top" data-spy="scroll" data-target=".navbar-fixed-top">
<!-- Navigation -->
<nav class="navbar navbar-custom navbar-fixed-top" role="navigation">
<div class="container">
<div class="navbar-header">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-main-collapse">
<i class="fa fa-bars"></i>
</button>
<a class="navbar-brand page-scroll" href="#page-top">
<i class="fa fa-play-circle"></i> <span class="light">Home</span>
</a>
</div>
<div class="navbar-header page-scroll">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
</div>
<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse navbar-right navbar-main-collapse">
<ul class="nav navbar-nav">
<!-- Hidden li included to remove active class from about link when scrolled up past about section -->
<li class="hidden">
<a href="#page-top"></a>
</li>
<li>
<a class="page-scroll" href="#news">News</a>
</li>
<li>
<a class="page-scroll" href="#about">About</a>
</li>
<li>
<a class="page-scroll" href="#publications">Publications</a>
</li>
<li>
<a class="page-scroll" href="#portfolio">Projects</a>
</li>
<li>
<a class="page-scroll" href="#software">Software</a>
</li>
<li>
<a class="page-scroll" href="#collaborators">Collaborators</a>
</li>
</ul>
</div>
<!-- /.navbar-collapse -->
</div>
<!-- /.container -->
</nav>
<!-- Intro Header -->
<header class="intro">
<div class="intro-body">
<div class="container">
<div class="row">
<div class="col-md-8 col-md-offset-2">
<p></p>
<p></p>
<p></p>
<h6 class="brand-heading">Building Marr's bridges</h6>
<p class="intro-text"><i>"To understand the relationship between behavior and the brain one has to begin by defining the function, or the computational goal, of a complete behavior. Only then can a neuroscientist determine how the brain achieves that goal"</i> — David Marr </p>
<a href="#news" class="btn btn-circle page-scroll">
<i class="fa fa-angle-double-down animated"></i>
</a>
</div>
</div>
</div>
</div>
</header>
<!-- News Section -->
<section id="news" class="container content-section text-justify">
<div class="row">
<div class="col-lg-12 text-center">
<h2 class="section-heading">Recent News</h2>
<!-- <h3 class="section-subheading text-muted"></h3> -->
</div>
</div>
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<p> <b>March 2017</b> Co-organizing the
<span class="journal">Neural Time Series Coding sprint</span>
at the NYU Center for Data Science.
<a href="https://kingjr.github.io/supervised_time_series/">
<i class="fa fa-external-link"></i> [meeting]</a>
</p>
<p> <b>February 2017</b> Two posters and a talk at <span class="journal"> Cosyne 2017</span>. <br><br>
What can we learn about natural vision from color tuning curves? (<i>Poster</i>). <br>
Deep learning approaches towards generating neuronal morphology (<i>Poster, with Roozbeh Farhoodi</i>). <br>
Distinct eye movement strategies differentially reshape visual space (<i>Talk, with Daniel Wood</i>). <br>
</p>
<p> <b>December 2016</b> Poster at
<span class="journal">NIPS Workshop: Brains and Bits</span>.
<i> Deep learning for ventral vision</i>.
<a href="https://www.dropbox.com/s/7x3ds5bzcgj489s/Ramkumar_V4_NIPS2016_small.jpg?dl=0">
<i class="fa fa-file-text-o"></i> [poster]</a>
</p>
<p> <b>November 2016</b> Talk at the
<span class="journal">Undergraduate Neuroscience Seminar,
Loyola University of Chicago</span>.
<i> Computation: by neurons and for neuroscience</i>.
</p>
<p> <b>September 2016</b> Talk at the
<span class="journal">Motor Control Group,
Brain and Mind Institute, Western Ontario</span>.
<i> On the computational complexity of movement sequence chunking</i>.
</p>
<p> <b>August 2016</b> New paper in <span class="journal"> PLoS One </span>
on reward coding in the premotor and motor cortices.
<a href="http://journals.plos.org/plosone/article/asset?id=10.1371%2Fjournal.pone.0160851.PDF">
<i class="fa fa-file-text-o"></i> [paper]</a>
<a href="http://f1000.com/prime/726679245?bd=1">
<i class="fa fa-newspaper-o"></i> [F1000] </a>
</p>
<p> <b>August 2016</b> Talk at <span class="journal">PyData Chicago</span>,
the Chicago edition of a global data science conference.
I will present a recently developed Python package
<span class="journal"> Pyglmnet</span>, an efficient implementation of
elastic-net regularized generalized linear models (GLMs).
<a href="http://pydata.org/chicago2016/schedule/presentation/15/">
<i class="fa fa-external-link"></i> [schedule]</a>
<a href="https://www.youtube.com/watch?v=zXec96KD1uA">
<i class="fa fa-video-camera"></i> [video]</a>
<a href="http://pavanramkumar.github.io/pydata-chicago-2016/index.html">
<i class="fa fa-th"></i> [slides]</a>
<a href="http://github.com/pavanramkumar/pydata-chicago-2016">
<i class="fa fa-github"></i> [tutorials]</a>
</p>
<p> <b>August 2016</b> <span class="journal">Deep Learning Summer School</span>,
the 4th edition of an annual week-long lecture series on Deep Learning
run by Yoshua Bengio and Aaron Courville (~25% acceptance rate).
<a href="https://sites.google.com/site/deeplearningsummerschool2016/">
<i class="fa fa-external-link"></i> [meeting]</a>
</p>
<p>
I'll present synthetic neurophysiology approaches to
characterize the functional properties of neurons in visual area V4.
We use convnets to predict spiking activity recorded from monkeys
freely viewing natural scenes and then show artificial stimuli to these model neurons.
<a href="https://www.dropbox.com/s/wmv4welwuq2so7l/Ramkumar_V4_DLSS2016.pdf?dl=0">
<i class="fa fa-file-text-o"></i> [poster]</a>
</p>
<p> <b>July 2016</b> New paper in <span class="journal"> eLife </span>
on neural correlates of motor plans to uncertain targets.
<a href="https://elifesciences.org/content/5/e14316">
<i class="fa fa-file-text-o"></i> [paper]</a>
<a href="https://elifesciences.org/content/5/e18721">
<i class="fa fa-newspaper-o"></i> [commentary]</a>
</p>
<p> <b>July 2016</b> New paper in <span class="journal"> Nature Communications </span>
on a theory of movement sequence chunking.
<a href="http://www.nature.com/ncomms/2016/160711/ncomms12176/pdf/ncomms12176.pdf">
<i class="fa fa-file-text-o"></i> [paper]</a>
</p>
<p> <b>June 2016</b> <span class="journal"> Spykes</span>,
a new Python package that makes standard spiking neural data and
tuning curve analysis easy and good-looking.
<a href="http://github.com/KordingLab/spykes">
<i class="fa fa-github"></i> [github]</a>
</p>
<p> <b>June 2016</b> New paper in <span class="journal"> Journal of Neurophysiology </span>
on disambiguating the role of frontal eye fields in spatial and feature attention
during natural scene search using generalized linear modeling of spike trains.
<a href="http://jn.physiology.org/content/early/2016/06/01/jn.01044.2015.abstract">
<i class="fa fa-file-text-o"></i> [paper]</a>
</p>
<p> <b>May 2016</b> New paper in <span class="journal"> Journal of Neurophysiology </span>
on expected reward modulation of FEF activity during natural scene search.
<a href="http://jn.physiology.org/content/early/2016/05/09/jn.00119.2016.abstract">
<i class="fa fa-file-text-o"></i> [paper]</a>
</p>
<p> <b>April 2016</b> <span class="journal"> Pyglmnet</span>,
a new Python package for elastic-net regularized generalized linear models!
<a href="http://github.com/glm-tools/pyglmnet">
<i class="fa fa-github"></i> [github]</a>
<a href="http://glm-tools.github.io/pyglmnet/">
<i class="fa fa-file-code-o"></i> [documentation]</a>
</p>
<p> <b>March 2016</b> New paper in <span class="journal"> Neuroimage </span>
on decoding natural scene category representations with MEG.
<a href="http://www.sciencedirect.com/science/article/pii/S1053811916002329">
<i class="fa fa-file-text-o"></i> [paper]</a>
</p>
<p> <b>February 2016</b> Two workshop talks at <span class="journal"> Cosyne 2016</span>:
<i> On the computational complexity of movement sequence chunking</i>, and
<i>The representation of uncertainty in the motor system</i>.
<a href="http://www.cosyne.org/c/index.php?title=Cosyne_16">
<i class="fa fa-external-link"></i> [meeting]</a> </p>
</p>
</div>
</div>
</section>
<!-- About Section -->
<section id="about" class="container content-section text-center">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<figure>
<img src="figs/Pavan.jpg" alt="Pavan" width=400 class="img-rounded" data-action="zoom">
<figcaption>Photo Credit: Titipat Achakulvisut</figcaption>
</figure>
<h3>Pavan Ramkumar</h3>
<ul class="list-inline banner-social-buttons">
<li>
<a href="mailto:[email protected]" class="btn btn-default btn-lg"><i class="fa fa-envelope-o fa-fw"></i> <span class="network-name">Email</span></a>
</li>
<li>
<a href="https://scholar.google.com/citations?user=JtltLUAAAAAJ&hl=en" class="btn btn-default btn-lg"><i class="fa fa-graduation-cap fa-fw"></i> <span class="network-name">Scholar</span></a>
</li>
<li>
<a href="http://pavanramkumar.github.io/pdfs/PavanRamkumar_CV_Oct_2016.pdf" class="btn btn-default btn-lg"><i class="fa fa-file-text-o fa-fw"></i> <span class="network-name">CV</span></a>
</li>
<li>
<a href="https://twitter.com/desipoika" class="btn btn-default btn-lg"><i class="fa fa-twitter fa-fw"></i> <span class="network-name">Twitter</span></a>
</li>
<li>
<a href="https://github.com/pavanramkumar" class="btn btn-default btn-lg"><i class="fa fa-github fa-fw"></i> <span class="network-name">Github</span></a>
</li>
</ul>
<p> Departments of Neurobiology and Physical Medicine & Rehabilitation, Northwestern University </p>
<p> Rehabilitation Institute of Chicago, 345 East Superior Street, Chicago, IL 60611. Phone: (312) 608-7178 </p>
<p> Over the past decade, unprecedented advances in both experimental techniques
to monitor brain function and in computational infrastructure and algorithms
have created tremendous opportunities to reverse engineer the brain basis
of perception and behavior. To transition our efforts from a pre-Galilean age
of individualized discoveries into an era of integrated theory development,
we need to make two major computational advances. First, we need to develop
models of perception and behavior that generate testable neurobiological
predictions. Second, we need to analyze data from neuroscience experiments to
test these predictions. I work at the intersection of these two computational
endeavors. </p>
<p> A <i>computational lens</i> into brain function enables us to formalize perception
and behavior as the result of neural computations. This branch of my research
brings computational motor control to the brain basis of movement and
computer vision models to the brain basis of vision. <i>Computational tools</i>
in neuroscience enable us to make sense of large, heterogeneous and noisy
datasets. This branch of my research brings machine-learning techniques
and open source software development to neural data analysis. Specifically,
I collaborate with theorists, data scientists, and experimentalists in both
human neuroimaging and primate neurophysiology to study natural scene perception,
visual search, motor planning, and movement sequence learning. </p>
<p> The rate of technical advancement in neuroscience will result in an avalanche
of data; yet for the foreseeable future, our experiments will undersample both
the animal’s behavioral repertoire and the entire variability of its brain state.
This combination of data deluge and partial observability makes the testing of
even the most neurobiologically grounded theories of brain function extremely
challenging. Advances in deep learning can contribute to both these problems.
Modern deep neural networks have as many neurons as a larval zebrafish. They can
already match human behavior in object recognition and visually guided reaching
movements. Importantly, unlike animal brains, deep neural networks with complex
behaviors are fully observable and controllable: we can record their state
throughout learning, modify weights, drop out neurons, or rewrite their loss
function. Thus, we are confronted with a choice to measure and perturb real
brains imprecisely or to measure and perturb deep network models of brain-like
behaviors precisely. As a sandbox for sharpening our theory, experiments and
data analysis tools, my research program will integrate this approach alongside
traditional computational neuroscience work to model and analyze a wide range
of behaviors in visual perception and motor control. </p>
</div>
</div>
</section>
<!-- Publications Section -->
<section id="publications" class="container content-section text-justify">
<div class="row">
<div class="col-lg-12 text-center">
<h2 class="section-heading">Publications</h2>
<!-- <h3 class="section-subheading text-muted"></h3> -->
</div>
</div>
<!-- List the publications -->
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<h3>In Preparation</h3>
<p>[17] <span class="author">Ramkumar P</span>, Turner RS, Körding KP. Optimization costs underlying movement sequence chunking in basal ganglia. </p>
<p>[16] <span class="author">Ramkumar P</span>, Fernandes HL, Smith MA, Körding KP. Hue tuning during active vision in natural scenes. </p>
<h3>In Review</h3>
<p>[15] Glaser J, Perich M, <span class="author">Ramkumar P</span>, Miller LE, Körding KP. Dorsal premotor cortex encodes ubiquitous probability distributions. </p>
<h3>2016</h3>
<p>[14] <span class="author">Ramkumar P</span>, Cooler S, Dekleva BM, Miller EL, Körding KP. Premotor and motor cortices encode reward. <span class="journal">PLoS One</span>, 11(8): e0160851.
<a href="http://journals.plos.org/plosone/article/asset?id=10.1371%2Fjournal.pone.0160851.PDF"><i class="fa fa-file-pdf-o"></i> [pdf] </a>
<a href="http://f1000.com/prime/726679245?bd=1"><i class="fa fa-newspaper-o"></i> [F1000] </a>
</p>
<p>[13] <span class="author">Ramkumar P*</span>, Lawlor PN*, Glaser JI, Wood DW, Segraves MA, Körding KP. Feature-based attention and spatial selection in frontal eye fields during natural scene search. <span class="journal">Journal of Neurophysiology</span>, EPub Ahead of Print.
<a href="http://jn.physiology.org/content/early/2016/06/01/jn.01044.2015"><i class="fa fa-external-link"></i> [paper]</a>
</p>
<p>[12] Glaser JI*, Wood DW*, Lawlor PN, <span class="author">Ramkumar P</span>, Körding KP, Segraves MA. Frontal eye field represents expected reward of saccades during natural scene search. <span class="journal">Journal of Neurophysiology</span>, EPub Ahead of Print.
<a href="http://pavanramkumar.github.io/pdfs/12-Ramkumar_etal_JNP_2016.pdf"><i class="fa fa-file-pdf-o"></i> [pdf]</a>
</p>
<p>[11] Dekleva BM, <span class="author">Ramkumar P</span>, Wanda PA, Körding KP, Miller LE. Uncertainty leads to persistent effects on reach representations in dorsal premotor cortex. <span class="journal">eLife</span>, 5:e14316.
<a href="https://elifesciences.org/content/5/e14316-download.pdf"><i class="fa fa-file-pdf-o"></i> [pdf] </a>
<a href="https://elifesciences.org/content/5/e18721"><i class="fa fa-newspaper-o"></i> [commentary] </a>
</p>
<p>[10] <span class="author">Ramkumar P</span>, Acuna DE, Berniker M, Grafton S, Turner RS, Körding KP. Chunking as the result of an efficiency–computation tradeoff. <span class="journal">Nature Communications</span>, 7:12176.
<a href="http://www.nature.com/ncomms/2016/160711/ncomms12176/pdf/ncomms12176.pdf"><i class="fa fa-file-pdf-o"></i> [pdf]</a>
</p>
<p>[9] <span class="author">Ramkumar P</span>, Hansen BC, Pannasch S, Loschky LC. Visual information representation and natural scene categorization are simultaneous across cortex: An MEG study. <span class="journal">Neuroimage</span>, 134:295–304.
<a href="http://pavanramkumar.github.io/pdfs/09-Ramkumar_etal_NIMG_2016.pdf"><i class="fa fa-file-pdf-o"></i> [pdf]</a>
</p>
<h3>2015</h3>
<p>[8] <span class="author">Ramkumar P</span>, Fernandes HL, Körding KP, Segraves MA. 2015. Modeling peripheral visual acuity enables discovery of gaze strategies at multiple time scales during natural scene search. <span class="journal">Journal of Vision</span>, 15(3):19.
<a href="http://pavanramkumar.github.io/pdfs/08-Ramkumar_etal_JVis_2015.pdf"><i class="fa fa-file-pdf-o"></i> [pdf]</a>
</p>
<h3>2014</h3>
<p>[7] <span class="author">Ramkumar P</span>, Parkkonen L, Hyvärinen A. 2014. Group-level spatial independent component analysis of Fourier envelopes of resting-state MEG data. <span class="journal">Neuroimage</span>, 86:480–491.
<a href="http://pavanramkumar.github.io/pdfs/07-Ramkumar_etal_Neuroimage_2014.pdf"><i class="fa fa-file-pdf-o"></i> [pdf]</a>
</p>
<h3>2013</h3>
<p>[6] <span class="author">Ramkumar P</span>, Jas M, Pannasch S, Parkkonen L, Hari R. 2013. Feature-specific information processing precedes concerted activation in human visual cortex. <span class="journal">Journal of Neuroscience</span>, 33: 7691–7699.
<a href="http://pavanramkumar.github.io/pdfs/06-Ramkumar_etal_JNeurosci_2013.pdf"><i class="fa fa-file-pdf-o"></i> [pdf]</a>
</p>
<p>[5] Hyvärinen A, <span class="author">Ramkumar P</span>. 2013. Testing independent component patterns by inter-subject or inter-session consistency. <span class="journal">Frontiers in Human Neuroscience</span>, 7 (94).
<a href="http://pavanramkumar.github.io/pdfs/05-Hyvarinen_Ramkumar_Frontiers_2013.pdf"><i class="fa fa-file-pdf-o"></i> [pdf]</a>
</p>
<h3>2012</h3>
<p>[4] <span class="author">Ramkumar P</span>, Parkkonen L, Hari R, Hyvärinen A. 2012. Characterization of neuromagnetic brain rhythms over time scales of minutes using spatial independent component analysis. <span class="journal">Human Brain Mapping</span>, 33: 1648–1662.
<a href="http://pavanramkumar.github.io/pdfs/04-Ramkumar_etal_HBM_2012.pdf"><i class="fa fa-file-pdf-o"></i> [pdf]</a>
</p>
<h3>2010</h3>
<p>[3] Hyvärinen A, <span class="author">Ramkumar P</span>, Parkkonen L, Hari R. 2010. Independent component analysis of short-time Fourier transforms for spontaneous EEG/MEG analysis. <span class="journal">Neuroimage</span>, 49: 257–271.
<a href="http://pavanramkumar.github.io/pdfs/03-Hyvarinen_etal_Neuroimage_2010.pdf"><i class="fa fa-file-pdf-o"></i> [pdf]</a>
</p>
<p>[2] <span class="author">Ramkumar P</span>, Parkkonen L, Hari R. 2010. Oscillatory Response Function: Towards a parametric model of rhythmic brain activity. <span class="journal">Human Brain Mapping</span>, 31: 820–834.
<a href="http://pavanramkumar.github.io/pdfs/02-Ramkumar_etal_HBM_2010.pdf"><i class="fa fa-file-pdf-o"></i> [pdf]</a>
</p>
<p>[1] Malinen S, Vartiainen N, Hlushchuk Y, Koskinen M, <span class="author">Ramkumar P</span>, Forss N, Kalso E, Hari R. 2010. Aberrant spatiotemporal resting-state brain activation in patients with chronic pain. <span class="journal">Proceedings of the National Academy of Sciences USA</span>, 107: 6493–6497.
<a href="http://pavanramkumar.github.io/pdfs/01-Malinen_etal_PNAS_2010&suppl.pdf"><i class="fa fa-file-pdf-o"></i> [pdf]</a>
</p>
</div>
</div>
</section>
<!-- Projects Section -->
<section id="portfolio" class="bg-light-gray">
<div class="container">
<div class="row">
<div class="col-lg-12 text-center">
<h2 class="section-heading">Projects</h2>
<!-- <h3 class="section-subheading text-muted">Lorem ipsum dolor sit amet consectetur.</h3> -->
</div>
</div>
<div class="row">
<div class="col-md-4 col-sm-6 portfolio-item">
<a href="#portfolioModal8" class="portfolio-link" data-toggle="modal">
<div class="portfolio-hover">
<div class="portfolio-hover-content">
<i class="fa fa-reorder fa-3x"></i>
</div>
</div>
<img src="badges/08-PINS.png" class="img-responsive" alt="">
</a>
<div class="portfolio-caption">
<h4>Natural scene psychophysics</h4>
<!-- <p class="text-muted">convnets</p> -->
</div>
</div>
<div class="col-md-4 col-sm-6 portfolio-item">
<a href="#portfolioModal7" class="portfolio-link" data-toggle="modal">
<div class="portfolio-hover">
<div class="portfolio-hover-content">
<i class="fa fa-reorder fa-3x"></i>
</div>
</div>
<img src="badges/07-CNN.png" class="img-responsive" alt="">
</a>
<div class="portfolio-caption">
<h4>Synthetic neurophysiology</h4>
<!-- <p class="text-muted">V4 and convnets</p> -->
</div>
</div>
<div class="col-md-4 col-sm-6 portfolio-item">
<a href="#portfolioModal1" class="portfolio-link" data-toggle="modal">
<div class="portfolio-hover">
<div class="portfolio-hover-content">
<i class="fa fa-reorder fa-3x"></i>
</div>
</div>
<img src="badges/01-FEF.png" class="img-responsive" alt="">
</a>
<div class="portfolio-caption">
<h4>Visual search in natural scenes</h4>
<p class="text-muted">Frontal eye fields</p>
</div>
</div>
<div class="col-md-4 col-sm-6 portfolio-item">
<a href="#portfolioModal2" class="portfolio-link" data-toggle="modal">
<div class="portfolio-hover">
<div class="portfolio-hover-content">
<i class="fa fa-reorder fa-3x"></i>
</div>
</div>
<img src="badges/02-MEG.png" class="img-responsive" alt="">
</a>
<div class="portfolio-caption">
<h4>Rapid scene categorization</h4>
<p class="text-muted">Whole-scalp magnetoencephalography</p>
</div>
</div>
<div class="col-md-4 col-sm-6 portfolio-item">
<a href="#portfolioModal3" class="portfolio-link" data-toggle="modal">
<div class="portfolio-hover">
<div class="portfolio-hover-content">
<i class="fa fa-reorder fa-3x"></i>
</div>
</div>
<img src="badges/03-Chunk.png" class="img-responsive" alt="">
</a>
<div class="portfolio-caption">
<h4>Movement chunking</h4>
<p class="text-muted">Basal ganglia</p>
</div>
</div>
<div class="col-md-4 col-sm-6 portfolio-item">
<a href="#portfolioModal4" class="portfolio-link" data-toggle="modal">
<div class="portfolio-hover">
<div class="portfolio-hover-content">
<i class="fa fa-reorder fa-3x"></i>
</div>
</div>
<img src="badges/04-Unc.png" class="img-responsive" alt="">
</a>
<div class="portfolio-caption">
<h4>Sensorimotor uncertainty</h4>
<p class="text-muted">Premotor cortex</p>
</div>
</div>
<div class="col-md-4 col-sm-6 portfolio-item">
<a href="#portfolioModal5" class="portfolio-link" data-toggle="modal">
<div class="portfolio-hover">
<div class="portfolio-hover-content">
<i class="fa fa-reorder fa-3x"></i>
</div>
</div>
<img src="badges/05-Reward.png" class="img-responsive" alt="">
</a>
<div class="portfolio-caption">
<h4>Reward</h4>
<p class="text-muted">Premotor and motor cortices</p>
</div>
</div>
<div class="col-md-4 col-sm-6 portfolio-item">
<a href="#portfolioModal6" class="portfolio-link" data-toggle="modal">
<div class="portfolio-hover">
<div class="portfolio-hover-content">
<i class="fa fa-reorder fa-3x"></i>
</div>
</div>
<img src="badges/06-V4.png" class="img-responsive" alt="">
</a>
<div class="portfolio-caption">
<h4>Color perception in natural scenes</h4>
<p class="text-muted">Area V4</p>
</div>
</div>
</div>
</div>
</section>
<!-- Software Section -->
<section id="software" class="container content-section text-justify">
<div class="row">
<div class="col-lg-12 text-center">
<h2 class="section-heading">Software</h2>
<!-- <h3 class="section-subheading text-muted"></h3> -->
</div>
</div>
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<h3> Pyglmnet: A Python package for elastic-net regularized generalized linear models </h3>
<ul class="list-inline banner-social-buttons">
<li>
<a href="https://github.com/glm-tools/pyglmnet" class="btn btn-default btn-lg"><i class="fa fa-github fa-fw"></i> <span class="network-name">Github</span></a>
</li>
<li>
<a href="http://glm-tools.github.io/pyglmnet" class="btn btn-default btn-lg"><i class="fa fa-file-code-o fa-fw"></i> <span class="network-name">Documentation</span></a>
</li>
<li>
<a href="https://github.com/pavanramkumar/pydata-chicago-2016" class="btn btn-default btn-lg"><i class="fa fa-github fa-fw"></i> <span class="network-name">Tutorials</span></a>
</li>
</ul>
<p><a href="https://en.wikipedia.org/wiki/Generalized_linear_model">
Generalized linear models (GLMs)</a> are powerful tools for
multivariate regression. They allow us to model different types
of target variables (real-valued, categorical, counts, ordinal, etc.)
using multiple predictors or features. In the era of big data
and high-performance computing, GLMs have come to be widely applied
across the sciences, economics, business, and finance. </p>
<p>In the era of exploratory data analyses with a large number of
predictor variables, it is important to regularize. Regularization
prevents overfitting by adding a penalty term to the negative log likelihood and
can be used to articulate prior knowledge about the parameters
in a structured form. </p>
<p>Despite the attractiveness of regularized GLMs, the available
tools in the Python data science ecosystem are highly fragmented.
More specifically,
<a href="http://statsmodels.sourceforge.net/devel/glm.html">statsmodels</a>
provides a wide range of link functions but no regularization.
<a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html">scikit-learn</a>
provides elastic net regularization but only for linear models.
<a href="https://github.com/scikit-learn-contrib/lightning">lightning</a>
provides elastic net and group lasso regularization,
but only for linear and logistic regression.</p>
<p>Pyglmnet is a response to this fragmentation. </p>
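<p>As a concrete but deliberately minimal illustration of what such a tool optimizes (a sketch only, not Pyglmnet's actual implementation; the function and parameter names below are made up for this example), here is the elastic-net penalized negative log-likelihood for a Poisson GLM with a log link:</p>
<pre><code>
# Illustrative sketch of the objective an elastic-net regularized Poisson GLM
# minimizes (not Pyglmnet's implementation; names are hypothetical).
import numpy as np

def penalized_poisson_loss(beta0, beta, X, y, reg_lambda=0.1, alpha=0.5):
    """Negative Poisson log-likelihood plus an elastic-net penalty.

    alpha mixes the L1 and L2 terms; reg_lambda scales the overall penalty.
    """
    mu = np.exp(beta0 + X.dot(beta))           # conditional mean under a log link
    neg_log_lik = np.sum(mu - y * np.log(mu))  # Poisson NLL, up to a constant in y
    l1 = np.sum(np.abs(beta))                  # lasso term
    l2 = 0.5 * np.sum(beta ** 2)               # ridge term
    return neg_log_lik + reg_lambda * (alpha * l1 + (1.0 - alpha) * l2)
</code></pre>
<p>For the actual API and solvers, see the documentation linked above.</p>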
</div>
</div>
<!-- ################################################################ -->
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<h3> Spykes: A Python package for spike data analysis and visualization </h3>
<ul class="list-inline banner-social-buttons">
<li>
<a href="https://github.com/KordingLab/spykes" class="btn btn-default btn-lg"><i class="fa fa-github fa-fw"></i> <span class="network-name">Github</span></a>
</li>
</ul>
<p>Almost any electrophysiology study of awake behaving animals
relies on a battery of standard analyses. </p>
<p>Raster plots and peri-stimulus time histograms aligned to
stimuli and behavior provide a snapshot visual description of
neural activity. Similarly, tuning curves are the most standard
way to characterize how neurons encode stimuli or behavioral
preferences. With increasing popularity of population recordings,
maximum-likelihood decoders based on tuning models are becoming
part of this standard. </p>
<p>Yet, virtually every lab relies on a set of in-house analysis
scripts to go from raw data to summaries. We want to change this
with Spykes, a collection of Python tools to make visualization
and analysis of spiking neural data easy and reproducible. </p>
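<p>As a rough sketch of the kind of computation involved (plain NumPy for illustration, not the Spykes API; the function below is hypothetical), a PSTH aligns spike times to each event, bins them, and averages across trials:</p>
<pre><code>
# Minimal peri-stimulus time histogram (PSTH) sketch in plain NumPy.
# Illustrative only; not the Spykes API.
import numpy as np

def psth(spike_times, event_times, window=(-0.5, 1.0), bin_size=0.01):
    """Trial-averaged firing rate in bins aligned to a list of event times.

    spike_times and event_times are assumed to be NumPy arrays of times in seconds.
    """
    edges = np.arange(window[0], window[1] + bin_size, bin_size)
    counts = np.zeros(len(edges) - 1)
    for t in event_times:
        aligned = spike_times - t                      # spike times relative to this event
        counts += np.histogram(aligned, bins=edges)[0]
    rate = counts / (len(event_times) * bin_size)      # spikes per second, averaged over trials
    return edges[:-1], rate
</code></pre>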
</div>
</div>
</section>
<!-- Collaborators section -->
<section id="collaborators" class="container content-section text-justify">
<div class="row">
<div class="col-lg-12 text-center">
<h2 class="section-heading">Collaborators</h2>
<h3 class="section-subheading text-muted">Graph visualization of collaborators and projects</h3>
</div>
</div>
<div class="row">
<div class="col-sm-8 col-sm-offset-2">
<p>
Below is a visualization of my publications and ongoing projects embedded in my network of 28 co-authors using the <a href="https://github.com/mbostock/d3/wiki/Force-Layout">force-directed graph</a> schema. Cool-colored nodes represent authors and warm-colored nodes represent projects. Links represent authors that have collaborated on a project. Nodes representing co-authors are sized according to the number of papers or projects I have in common with them. If you're viewing this on a desktop, moving the mouse over a node gives you the name of the author or the title of the publication or project that it represents.
</p>
<table style="width:100%">
<tr>
<td><span style="color:#07b849"><i class="fa fa-circle"></i></span></td>
<td>Me</td>
<td><span style="color:#9e9225"><i class="fa fa-circle"></i></span></td>
<td>Accepted/ Published Articles</td>
</tr>
<tr>
<td><span style="color:#3b5998"><i class="fa fa-circle"></i></span></td>
<td>PI co-authors</td>
<td><span style="color:#d2c43d"><i class="fa fa-circle"></i></span></td>
<td>Articles Under Review/ Revision</td>
</tr>
<tr>
<td><span style="color:#88bee4"><i class="fa fa-circle"></i></span></td>
<td>Grad student or postdoc co-authors</td>
<td><span style="color:#d27a3d"><i class="fa fa-circle"></i></span></td>
<td>Ongoing Projects</td>
</tr>
</table>
</div>
</div>
<!-- Add d3js visualization -->
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<script>
var width = 1200,
height = 500;
var color = d3.scale.category20();
var force = d3.layout.force()
.charge(-150)
.linkDistance(70)
.gravity(0.05)
.size([width, height]);
var svg = d3.select("body").append("svg")
.attr("width", width)
.attr("height", height);
//.attr("class", 'col-sm-8 col-sm-offset-2');
d3.json("./d3/projects_pavan_nocomments.json", function(error, graph) {
force
.nodes(graph.nodes)
.links(graph.links)
.start();
var link = svg.selectAll("line.link")
.data(graph.links)
.enter().append("line")
.attr("class", "link")
.style("stroke-width", function(d) { return Math.sqrt(d.value); });
var node = svg.selectAll("circle.node")
.data(graph.nodes)
.enter().append("circle")
.attr("class", "node")
.attr("r", function(d) { return d.size; })
.style("fill", function(d) { return d.color; })
.call(force.drag);
node.append("title")
.text(function(d) { return d.name; });
force.on("tick", function() {
link.attr("x1", function(d) { return d.source.x; })
.attr("y1", function(d) { return d.source.y; })
.attr("x2", function(d) { return d.target.x; })
.attr("y2", function(d) { return d.target.y; });
node.attr("cx", function(d) { return d.x; })
.attr("cy", function(d) { return d.y; });
});
});
</script>
</div>
</div>
</section>
<!-- Map Section
<div id="map"></div> -->
<!-- Footer -->
<footer>
<div class="container text-center">
<p>© Pavan Ramkumar 2016</p>
</div>
</footer>
<!-- Portfolio Modal 1 -->
<div class="portfolio-modal modal fade" id="portfolioModal1" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<!-- Project Details Go Here -->
<h2>Visual search and FEF</h2>
<h4><span style="color: #3b5998">With: Hugo Fernandes, Pat Lawlor, Josh Glaser, Daniel Wood, Mark Segraves, Konrad Körding</span></h4>
<button type="button" class="btn btn-primary" data-dismiss="modal"><i class="fa fa-backward"></i> Back</button>
<p>
To bring objects of interest in the visual environment into focus, we shift our gaze up to three times a second.
Deciding where to look next among a large number of alternatives is thus one of the most frequent decisions we make.
Why do we look where we look?
</p>
<p>Studies have shown that both bottom-up (e.g. luminance contrast, saliency, edge–energy) and top-down factors (such as target similarity or relevance) influence the guidance of eye movements.
Predictive models of eye movements are derived from priority maps composed of one or more of these factors.
Evidence for such maps has been reported in the lateral intraparietal (LIP) cortex, the frontal eye field (FEF), primary visual cortex (V1), and/or ventral visual area V4, but computational maps of priority have not been used to model neural activity.
</p>
<p>In this project, we attempt to unify computational models and neurophysiology of gaze priority maps.
We develop primate models of gaze behavior in natural scenes by rewarding monkeys for finding targets embedded in scenes.
We build predictive models of gaze using computational definitions of visual priority and quantify model predictions on monkeys' fixation choices.
</p>
<figure>
<img src="figs/FEF_Figure01.png" alt="FEF" width=750 class="img-responsive" data-action="zoom">
<figcaption>
<b>Fig. 1</b>. Prediction of gaze using visual features at fixation. We compared bottom-up saliency, top-down relevance and edge-energy at fixated (above: left panel) and non-fixated, i.e. shuffled control (above: right panel) targets by computing the area under the ROC curves (below). The star indicates a statistically significant difference from a chance level of 0.5.
</figcaption>
</figure>
<p> To ask if FEF neurons represent computational descriptions of priority including saliency, relevance and energy, we build generalized linear models (GLMs) of Poisson-spiking neurons (see Fig. 2). </p>
<figure>
<img src="figs/FEF_Figure02.png" alt="FEF" width=750 class="img-responsive" data-action="zoom">
<figcaption>
<b>Fig. 2</b>. A comprehensive generative model of neural spikes using a GLM framework. The model comprises visual features: saliency, relevance and energy from a neighborhood around fixation location after the saccade, un-tuned responses aligned to saccade and fixation onsets, and the direction of the saccade. The features are passed through parameterized spatial filters (representing the receptive field) and temporal filters. The model also comprises spike history terms (or self terms). All these features are linearly combined followed by an exponential nonlinearity, which gives the conditional intensity function of spike rate, given model parameters. Spikes are generated from this model by sampling from a Poisson distribution with mean equal to the conditional intensity function. Brown: basis functions modeling temporal activity around the saccade onset; Green: basis functions modeling temporal responses around the fixation onset; Blue: basis functions modeling temporal responses after spike onset.
</figcaption>
</figure>
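<p>Schematically, the generative structure in Fig. 2 can be summarized in a few lines (a simplified sketch, not our analysis code; the filtered covariates are assumed to be precomputed): a linear combination of covariates is passed through an exponential nonlinearity to give the conditional intensity, and spikes are drawn as Poisson counts.</p>
<pre><code>
# Simplified sketch of the Poisson GLM generative structure described in Fig. 2.
# Illustrative only; covariate construction and the spatial/temporal filters are omitted.
import numpy as np

def simulate_spikes(X, weights, baseline, bin_size=0.001, seed=0):
    """Draw spike counts per time bin from a Poisson GLM.

    X        : (n_bins, n_covariates) design matrix of filtered covariates
    weights  : (n_covariates,) linear weights
    baseline : scalar offset setting the background rate
    """
    rng = np.random.default_rng(seed)
    intensity = np.exp(baseline + X.dot(weights))  # conditional intensity (spikes per second)
    return rng.poisson(intensity * bin_size)       # Poisson spike counts in each bin
</code></pre>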
<p> We find that the majority of the variance in FEF firing rates is explained by the direction of the upcoming movement.
However, we find firing rate enhancement for saccades to targets within the receptive field, suggesting that FEF neurons encode expected reward during search.
</p>
<button type="button" class="btn btn-primary" data-dismiss="modal"><i class="fa fa-backward"></i> Back</button>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- Portfolio Modal 2 -->
<div class="portfolio-modal modal fade" id="portfolioModal2" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<h2>Rapid scene categorization and MEG</h2>
<h4><span style="color: #3b5998">With: Sebastian Pannasch, Bruce Hansen, Lester Loschky</span></h4>
<button type="button" class="btn btn-primary" data-dismiss="modal"><i class="fa fa-backward"></i> Back</button>
<p>To make effective decisions in our environment, our brains must be able to recognize and comprehend real-world scenes. Humans are remarkably good at recognizing scene categories from the briefest of glimpses (&lt; 20 ms). The holistic information that can be extracted in such short durations has come to be known as <b><i>scene gist</i></b>.</p>
<p>Computational models of scene gist, such as the spatial envelope (SpEn) model, as well as behavioral studies provide useful suggestions for what visual features the brain might use to categorize scenes. However, they do not inform us about when and where in the brain such information is represented and how scene-categorical judgments are made on the basis of these representations.</p>
<figure>
<img src="figs/MEGFigure01.png" alt="MEG" width=750 class="img-responsive" data-action="zoom">
<figcaption>
<b>Fig. 3</b>. Means and standard errors of median cross-validated relative R2s for each region of interest (ROI). Relative R2s are a measure of the extent to which unique variance in the neural decoder’s pattern of errors can be explained by behavioral confusion matrices (red) or SpEn confusion matrices (blue). Red (blue) bands at the bottom of each trace indicate the time durations for which the unique variance in neural decoder-based errors explained by behavioral errors exceeds (falls below) that explained by image decoder-based errors. In the two time series plots for each ROI, the left and right plots represent data from the left and right hemispheres, respectively. The legends show the ROIs on the lateral (above) and medial (below) surfaces of the left hemisphere.
</figcaption>
</figure>
<p>Here, we investigate the brain-behavior relationship underlying rapid scene categorization. We use whole-scalp magnetoencephalography (MEG) to track visual scene information flow in the ventral and temporal cortex, using spatially and temporally resolved maps of decoding accuracy. To investigate the time course of visual representation versus behavioral category judgment, we then use neural decoders in concert with decoders based on SpEn features to study errors in behavioral categorization (Fig. 3). Using confusion matrices, we track how well patterns of errors in neural decoders can be explained by SpEn decoders and behavioral errors. We find that both SpEn decoders and behavioral errors explain unique variance throughout the ventrotemporal cortex, and that their effects are temporally simultaneous and restricted to 100-250 ms after stimulus onset. Thus, during rapid scene categorization, neural processes that ultimately result in behavioral categorization are simultaneous and colocalized with neural processes underlying visual information representation. </p>
<button type="button" class="btn btn-primary" data-dismiss="modal"><i class="fa fa-backward"></i> Back</button>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- Portfolio Modal 3 -->
<div class="portfolio-modal modal fade" id="portfolioModal3" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<!-- Project Details Go Here -->
<h2> Movement chunking and basal ganglia</h2>
<h4><span style="color: #3b5998">With: Daniel Acuna, Max Berniker, Rob Turner, Scott Grafton, Konrad Körding</span></h4>
<button type="button" class="btn btn-primary" data-dismiss="modal"><i class="fa fa-backward"></i> Back</button>
<p>We routinely execute complex movement sequences with such effortless ease that the computational complexity of planning them optimally is often under-appreciated. When movements are learned from external cues, they start out highly regular, progressively become more varied until they become habitual and more regular again (Fig. 4). </p>
<figure>
<img src="figs/Chunking_Figure01.png" alt="Chunking" width=750 class="img-responsive" data-action="zoom">
<figcaption>
<b>Fig. 4</b>. Movements become more regular with learning (a) Reaching task. Monkeys move a cursor through 5 out-and-back reaches (10 elemental movements) between central and peripheral targets. White-filled circular cues indicate which target to capture. Each successful element is rewarded. (b) Hand trajectories; left: position, right: speed. Each trial is stretched to a duration of 5 s. Gray traces indicate single trials and bold colored traces indicate mean traces. The colored envelopes around the mean trace indicate one standard deviation on either side of the mean.
</figcaption>
</figure>
<p> A common facet of many such complex movements is that they tend to be discrete in nature, i.e. they are often executed as chunks (Fig. 5a). Here, we framed movement chunking as the result of a trade-off between the desire to make efficient movements and the need to minimize the computational complexity of optimizing them (Fig. 5b). We show that monkeys adopt a cost-effective strategy to deal with this tradeoff. By modeling chunks as minimum-jerk trajectories (Fig. 5c), we found that kinematic sequences are best described as progressively resembling locally optimal trajectories, with optimization occurring within chunks (Fig. 5d). Thus, the cumulative optimization costs are kept in check over the course of learning.</p>
<figure>
<img src="figs/Chunking_Figure02.png" alt="Chunking" width=750 class="img-responsive" data-action="zoom">
<figcaption>
<b>Fig. 5</b>. Modeling chunks as locally optimal-control trajectories. (a) Illustration of canonical halt and via models; green: via points, red: halt points. (b) Computing the tradeoff between efficiency and the complexity of being efficient. Each gray dot represents one potential chunk structure plotted against its maximally achievable efficiency under the model, and the corresponding computational complexity. The red curve is the convex hull of these points, and represents the Pareto frontier of the efficiency–computation tradeoff curve. (c) Kinematics (black) and minimum jerk model (blue). Left: Trajectories become more looped as monkeys optimize over longer horizons. Middle: Speed traces. Initially trajectory optimization appears to happen over several chunks. Later in learning, a smaller number of chunks reveal increasingly efficient movements. Right: The squared jerk of the kinematic data and the model suggests that the behavior approaches the efficiency of the minimum jerk model after learning. (d) Goodness of fit (Pearson’s correlation coefficient) between the speed profiles of the minimum jerk model and the kinematic data (mean ± 2 SEMs) across days of learning.
</figcaption>
</figure>
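<p>For reference, the quantity these minimum-jerk models minimize is the time integral of squared jerk (the third derivative of hand position); its negative is the movement-efficiency regressor described below. A minimal numerical sketch (illustrative only, not the analysis code):</p>
<pre><code>
# Illustrative sketch: integrated squared jerk of a sampled trajectory,
# the cost minimized by minimum-jerk models. Not the analysis code itself.
import numpy as np

def integrated_squared_jerk(position, dt):
    """Numerically integrate squared jerk for a (n_samples, n_dims) trajectory."""
    jerk = np.diff(position, n=3, axis=0) / dt ** 3  # third finite difference
    return np.sum(jerk ** 2) * dt                    # discrete approximation of the integral
</code></pre>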
<p> To study the underlying neural basis of these tradeoffs, we also record from the globus pallidus (GP), a motor output structure of the basal ganglia implicated in habit learning and movement sequencing. Our hypothesis is that neurons encode the complexity of a movement, and that this encoding depends on whether the movement is novel or habitual. To test this hypothesis, we train monkeys to execute habitual overlearned (OL) sequences comprising consecutive center–out–and–back reaches always in the same order, as well as novel random (RN) sequences comprising a set of four cued reaches whose directions vary from trial to trial. For each neuron and sequence class, we fit Poisson generalized linear models (GLMs) to account for firing rate variability as a function of multiple regressors: (1) the cue type, (2) the order of the movement in the sequence, (3) the upcoming reach direction, (4) the instantaneous hand kinematics, (5) the computational complexity of executing the rest of the sequence as a single chunk relative to the complexity of executing each movement as a separate chunk, (6) the efficiency of the upcoming movement, defined as the negative squared jerk, (7) a reward event at the end of each movement, and (8) a trial-end regressor. </p>
<figure>
<img src="figs/Chunking_Figure03.png" alt="Chunking" width=750 class="img-responsive" data-action="zoom">
<figcaption>
<b>Fig. 6</b>. (a) GLM fits and partial model predictions for each of 4 elements (L to R) for an example neuron encoding cost and order of the movement in the sequence. The black trace shows the PSTH aligned to cue onset, and colored traces in each panel show the respective partial model predictions for that covariate. Shaded error bars show SEMs. (b) Comparison of the number of neurons (as a % of all neurons) significantly modulated by each covariate, between OL and RN sequence classes. Positive numbers for a given covariate indicate that more neurons encode it in the OL condition. Neurons significantly encode a given covariate if a model that leaves out the covariate does significantly worse than the full model (99% CIs of cross-validated relative pseudo-R2s). Error bars are estimated by bootstrapping over 1000 repeats. All traces are computed from a held-out test set.
</figcaption>
</figure>
<p> We found that GP encoded the order of the movement in the context of the sequence, as well as the relative complexity and efficiency of movements (see example neuron in Fig. 6a). When we compared the habitual (OL) and the novel (RN) sequences, a significantly larger proportion of GP neurons encoded the order of the movement, but a significantly smaller proportion encoded the relative complexity (Fig. 6b). The complexity of optimal control is thus an important driver of habit learning, both behaviorally and neurally. </p>
<p> Over and above the novel role of GP neurons in indexing computational complexity, we also found that a significantly larger proportion of GP neurons encode instantaneous arm kinematics during the habitual relative to the novel condition. Furthermore, a significantly smaller proportion of GP neurons encoded the cue and the reward during the habitual relative to the novel condition, suggesting that as movements become habituated with learning, GP firing rates are less modulated by response to an external movement cue or an external reward. Our findings suggest a dynamic functional role for GP — starting with bracketing movement sequences for goal-directed movements, and transitioning to a more direct control of movements once they are habituated. </p>
<button type="button" class="btn btn-primary" data-dismiss="modal"><i class="fa fa-backward"></i> Back</button>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- Portfolio Modal 4 -->
<div class="portfolio-modal modal fade" id="portfolioModal4" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<!-- Project Details Go Here -->
<h2 class="section-heading">Sensorimotor uncertainty and PMd</h3>
<h4><span style="color: #3b5998">With: Brian Dekleva, Paul Wanda, Lee Miller, Konrad Körding</span></h4>
<button type="button" class="btn btn-primary" data-dismiss="modal"><i class="fa fa-backward"></i> Back</button>
<p>Movements in the real world are often planned with uncertain information about where to move. Understanding the role of uncertainty in movement plans is important to improve rehabilitation therapies and brain-based prostheses. Bayesian estimation theory, which weights sources of information in inverse proportion to their uncertainty, predicts movement behavior when uncertainty about subjective beliefs (priors) and sensory observations (likelihoods) is varied. By manipulating the sensory uncertainty of the target in each trial of a reaching task, as well as the prior uncertainty of the target location during the task, we show that monkeys' reaches can be predicted using a Bayesian model that weights the different sources of uncertainty appropriately (Fig. 7). </p>
<figure>
<img src="figs/Uncertainty_Figure01.jpg" alt="Uncertainty" width=750 class="img-responsive" data-action="zoom">
<figcaption>
<b>Fig. 7</b>. Left: Before a movement cue, the monkey was shown 10 dots from one of two circular Gaussian likelihood distributions (κ = 5 or 50). In each trial, the angular position of the target was drawn from a prior circular Gaussian distribution with one of two prior variances (κ = 5 or 50). The legend shows narrow and broad prior conditions in lower and upper case, respectively. Target locations and movement trajectories for individual trials are shown in respective colors. Right: The actual reach angle is plotted against likelihood mean angle. If monkeys plan their reach by integrating prior and likelihood information in a probabilistic manner, the slope of this line describes the extent to which the likelihood information influences reaching behavior (1). Bayes-optimal and observed slopes (bootstrapped 95% CIs) are shown on the top/bottom and left/right of the legend, respectively. The monkey weighs prior and likelihood meaningfully although it trusts likelihood information more than it should if it were strictly Bayes-optimal.
</figcaption>
</figure>
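<p>For intuition, the textbook Gaussian version of this computation is a precision-weighted average of the prior and likelihood means; the likelihood weight is what the slope in Fig. 7 estimates. (The sketch below uses plain Gaussians for simplicity; the actual analysis uses circular Gaussian, i.e. von Mises, distributions.)</p>
<pre><code>
# Textbook Gaussian cue-combination sketch, for intuition only.
def bayes_optimal_estimate(prior_mean, prior_var, like_mean, like_var):
    """Precision-weighted combination of prior and likelihood (Gaussian case)."""
    w_like = (1.0 / like_var) / (1.0 / like_var + 1.0 / prior_var)
    return w_like * like_mean + (1.0 - w_like) * prior_mean, w_like

# A broad prior (variance 4) and a narrow likelihood (variance 1) pull the
# estimate toward the likelihood mean: w_like = 0.8, estimate = 8.0.
estimate, w_like = bayes_optimal_estimate(0.0, 4.0, 10.0, 1.0)
</code></pre>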
<p> To perform probabilistic inference of this nature, the brain must represent/reflect uncertainty. We asked how sensory uncertainty is represented in population activity in dorsal premotor cortex (PMd) and primary motor cortex (M1) during movement preparation. We found that greater sensory uncertainty led to increased firing rates and broad recruitment of neurons in PMd but not M1 (Fig. 8). This broad recruitment suggests that multiple movement plans are represented in PMd activity until uncertainty-reducing feedback about the target location is received. </p>
<figure>
<img src="figs/Uncertainty_Figure02.png" alt="Uncertainty" width=750 class="img-responsive" data-action="zoom">
<figcaption>
<b>Fig. 8</b>. Spatio-temporal activity profiles in PMd and M1. Activity is plotted for all neurons, ranked by the distance between preferred direction and reach direction. (a) In PMd, uncertainty led to increased baseline firing rates and broader recruitment. (b) There was narrower recruitment under zero-uncertainty conditions in M1, but no differences between low and high uncertainty conditions.
</figcaption>
</figure>
<button type="button" class="btn btn-primary" data-dismiss="modal"><i class="fa fa-backward"></i> Back</button>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- Portfolio Modal 5 -->
<div class="portfolio-modal modal fade" id="portfolioModal5" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<!-- Project Details Go Here -->
<h2 class="section-heading">Reward in premotor and motor cortices</h3>
<h4><span style="color: #3b5998">With: Brian Dekleva, Sam Cooler, Lee Miller, Konrad Körding</span></h4>
<button type="button" class="btn btn-primary" data-dismiss="modal"><i class="fa fa-backward"></i> Back</button>
<p>Reward is an important feedback signal for motor learning. How reward information reaches the motor cortex to influence movement planning and execution is unknown. Therefore, we asked whether premotor and motor cortices encode reward. We found a strong and robust encoding of reward outcome in premotor (PMd) and motor (M1) cortices. In particular, neurons increased their firing rate following trials that were not rewarded. We further investigated the nature of this signal and established that it is unlike any previously reported reward signal in the brain. It is unrelated to reward magnitude expectation or prediction error, it is not influenced by history of reward, and it is not modulated by error magnitude. Using generalized linear modeling of spikes we also carefully verified that the signal is not explained away by differences in kinematics or return reach planning activity. Thus, we found a categorical reward signal in PMd and M1 signaling the presence or absence of reward at the end of a goal-directed task.</p>
<figure>
<img src="figs/Reward_Figure01.png" alt="Reward" width=750 class="img-responsive" data-action="zoom">
<figcaption>
<b>Fig. 9</b>. Generalized linear modeling of reward coding. Model predictions for two example neurons are shown. Left: PSTHs for rewarded (blue) and unrewarded (red) trial subsets are shown for the test set, along with corresponding single-trial rasters for both data and model predictions on the test set. Right: Component predictions corresponding to each covariate.
</figcaption>
</figure>
<button type="button" class="btn btn-primary" data-dismiss="modal"><i class="fa fa-backward"></i> Back</button>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- Portfolio Modal 6 -->
<div class="portfolio-modal modal fade" id="portfolioModal6" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<!-- Project Details Go Here -->
<h2 class="section-heading">V4 color tuning in natural scenes</h3>
<h4><span style="color: #3b5998">With: Hugo Fernandes, Matt Smith, Konrad Körding</span></h4>
<button type="button" class="btn btn-primary" data-dismiss="modal"><i class="fa fa-backward"></i> Back</button>
<p>How does the subjective experience of color arise from oscillations in the visible electromagnetic spectrum? We know that color perception begins in the L, M and S cone cells in the retina, which have broad preferred sensitivities to greenish-yellowish, greenish, and bluish frequencies in the electromagnetic spectrum. Following this stage, tuning to opponency — red-green, blue-yellow, light-dark — begins to emerge in the retina, the lateral geniculate nucleus (LGN), and the primary visual cortex (V1) where specialized color cells in V1 are clustered into "blobs". </p>
<p> The ventral visual area V4 is the first stage at which tuning to hue — subjectively experienced colors — begins to emerge. However, most experiments measuring hue tuning have been done with highly controlled artificial stimuli such as bars or gratings carefully optimized for preferred orientation and spatial receptive fields. Therefore, not much is known about the neural representation of perceived hue in naturalistic conditions. </p>
<figure>