Saturday, August 11, 2018

Joining the editorial board of PLOS ONE

I have joined the Editorial Board of PLOS ONE. There are a few things about PLOS ONE that particularly appeal to me:
  • Broad scope is great for interdisciplinary research. My own research draws primarily on experimental psychology, neuroscience, and computer science, as well as linguistics and neuropsychology/neurology. Before writing a manuscript, I often have to decide whether to submit it to a cognitive psychology journal, a clinically-oriented (neuropsychology or neurology) journal, or a neuroscience journal. This decision is not always easy and it has a major impact on how the manuscript needs to be written and who will review it. Since the scope of PLOS ONE covers the full range of natural and social sciences as well as medical research, neither I nor prospective authors need to worry about that. Just clearly describe the motivation, methods, results, and conclusions of the study and trust that Editors like me will find appropriate reviewers.
  • Accepts various article types. In addition to standard research articles, PLOS ONE accepts systematic reviews, methods papers (including descriptions of software, databases, and other tools), qualitative research, and negative results. If your manuscript is reporting original research, then it is a viable submission.
  • Publication decisions based on scientific rigor, not perceived impact (see full Criteria for Publication). Trying to guess what kind of impact a paper will have on the field is difficult, and it is unnecessary because the field can figure that out on its own. As a reviewer, I focus on scientific rigor and whether the methods and results align with the motivation and conclusion. It's nice that PLOS ONE has the same focus. This emphasis on technical and ethical standards also means that PLOS ONE can publish good replication studies and negative results, which is critical for reducing publication bias and moving our field forward.
  • Fast decision times. Editors are expected to make decisions within a few days and reviewers are asked to complete their reviews in 10 days. Of course, this is no guarantee that a manuscript will have a fast decision -- it can take a long time to find reviewers and reviewers do not always meet their deadlines. But I think giving reviewers 10 days instead of 4-6 weeks (typical for psychology journals) and expecting editors to make fast decisions is a step in the right direction.
  • Open access at reasonable cost. This is not the place to discuss the relative merits of the standard reader-pay publication model and the open access author-pay model used by PLOS ONE. Suffice it to say that I like the open access model and I appreciate that PLOS ONE is doing it at a cost ($1595 USD) that is on the low end compared to other established open access journals.

Monday, April 16, 2018

Correcting for multiple comparisons in lesion-symptom mapping

We recently wrote a paper about correcting for multiple comparisons in voxel-based lesion-symptom mapping (Mirman et al., in press). Two methods did not perform very well: (1) setting a minimum cluster size based on permutations produced too much spillover beyond the true region, and (2) false discovery rate (FDR) correction produced anti-conservative results for smaller sample sizes (N = 30–60). We developed an alternative solution by generalizing the standard permutation-based family-wise error correction approach, which provides a principled way to balance false positives and false negatives.
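
For readers who haven't seen it before, here is a minimal sketch (in R, with simulated data and made-up object names, not code from the paper) of the standard maximum-statistic version of permutation-based family-wise error correction that our approach generalizes:

# simulated binary lesion data (subjects x voxels) and a behavioral score
set.seed(1)
n_sub <- 40; n_vox <- 500
lesion <- matrix(rbinom(n_sub * n_vox, 1, 0.3), n_sub, n_vox)
score  <- rnorm(n_sub)

voxel_t <- function(y, X) {
  # t-statistic comparing intact vs. lesioned subjects at each voxel
  apply(X, 2, function(v) unname(t.test(y[v == 0], y[v == 1])$statistic))
}

obs_t <- voxel_t(score, lesion)

# null distribution of the maximum absolute voxelwise statistic
n_perm <- 1000
max_t  <- replicate(n_perm, max(abs(voxel_t(sample(score), lesion))))

# voxels exceeding this threshold are significant at family-wise alpha = .05
fwer_thresh <- quantile(max_t, 0.95)
sig_voxels  <- which(abs(obs_t) > fwer_thresh)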

For that paper, we focused on standard "mass univariate" VLSM, where the multiple comparisons are a clear problem. The multiple comparisons problem plays out differently in multivariate lesion-symptom mapping methods such as support vector regression LSM (SVR-LSM; Zhang et al., 2014; a slightly updated version is available from our GitHub repo). Multivariate LSM methods consider all voxels simultaneously, and there is not a simple relationship between voxel-level test statistics and p-values. In SVR-LSM, the voxel-level statistic is an SVR beta value, and the p-values for those betas are calculated by permutation. I've been trying to work out how to deal with multiple comparisons in SVR-LSM.
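
To make the permutation step concrete, here is a schematic R sketch; it uses a generic linear-kernel SVR from the e1071 package and the simulated lesion and score objects from the sketch above, so it illustrates the idea rather than the actual SVR-LSM implementation:

library(e1071)  # generic SVR; this is NOT the SVR-LSM toolbox

svr_betas <- function(y, X) {
  fit <- svm(X, y, type = "eps-regression", kernel = "linear", scale = FALSE)
  as.vector(t(fit$coefs) %*% fit$SV)  # primal weights: one beta per voxel
}

# observed betas from the simulated lesion matrix and behavioral score
obs_beta <- svr_betas(score, lesion)

# permute the behavioral scores, refit, and collect null betas for every voxel
n_perm    <- 500
perm_beta <- replicate(n_perm, svr_betas(sample(score), lesion))

# one-tailed voxelwise p-values: how often a permuted beta meets or exceeds
# the observed beta
p_vox <- rowMeans(perm_beta >= obs_beta)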

Friday, March 23, 2018

Growth curve analysis workshop slides

Earlier this month I taught a two-day workshop on growth curve analysis at the Georg-Elias-Müller Institute for Psychology in Göttingen, Germany. The purpose of the workshop was to provide a hands-on introduction to using GCA to analyze longitudinal or time course data, with a particular focus on eye-tracking data. All of the materials for the workshop are now available online (http://dmirman.github.io/GCA2018.html), including slides, examples, exercises, and exercise solutions. In addition to standard packages (ggplot2, lme4, etc.), we used my psy811 package for example data sets and helper functions.
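
For anyone who wants a quick taste before digging into the workshop materials, here is a minimal GCA sketch with simulated data (the variable names are made up for this example, not taken from the workshop exercises):

library(lme4)

# simulated time course data: 20 subjects, 2 conditions, 10 time bins
set.seed(10)
dat <- expand.grid(Subject = factor(1:20),
                   Condition = factor(c("Easy", "Hard")),
                   TimeBin = 1:10)
dat$Accuracy <- 0.5 + 0.03 * dat$TimeBin -
  0.1 * (dat$Condition == "Hard") + rnorm(nrow(dat), 0, 0.05)

# first- and second-order orthogonal polynomial time terms
t_vals <- sort(unique(dat$TimeBin))
t_poly <- poly(t_vals, 2)
dat[, c("ot1", "ot2")] <- t_poly[match(dat$TimeBin, t_vals), ]

# fixed effects of (polynomial) time and condition, random time slopes by subject
m <- lmer(Accuracy ~ (ot1 + ot2) * Condition + (ot1 + ot2 | Subject),
          data = dat, REML = FALSE)
summary(m)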

Monday, December 12, 2016

Flattened logistic regression vs. empirical logit

I first learned about quasi-logistic regression and the "empirical logit" from Dale Barr's (2008) paper, which just happened to be right next to the growth curve analysis paper that Jim Magnuson, J. Dixon, and I wrote. I came to understand and like this approach in 2010 when Dale and I co-taught a workshop on analyzing eye-tracking data at Northwestern. I give that background by way of establishing that I'm positively disposed to the empirical logit method. So I was interested to read a new paper by Seamus Donnelly and Jay Verkuilen (2017) in which they point out some weaknesses of this approach and offer an alternative solution.
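
For readers who haven't used it: with y looks to the target out of N samples in a time bin, the empirical logit is log((y + 0.5)/(N - y + 0.5)) with approximate variance 1/(y + 0.5) + 1/(N - y + 0.5), and the transformed values are analyzed with a weighted linear mixed model. Here is a minimal R sketch with simulated data (the column names are my own, not from either paper):

library(lme4)

# simulated fixation counts: y looks to target out of N samples per time bin
set.seed(2)
dat <- expand.grid(Subject = factor(1:20),
                   Condition = factor(c("A", "B")),
                   TimeBin = 1:10)
dat$N <- 25
dat$y <- rbinom(nrow(dat), dat$N, plogis(-1 + 0.25 * dat$TimeBin))

# empirical logit and inverse of its approximate variance (used as weights)
dat$elog <- log((dat$y + 0.5) / (dat$N - dat$y + 0.5))
dat$wts  <- 1 / (1 / (dat$y + 0.5) + 1 / (dat$N - dat$y + 0.5))

# weighted linear mixed model on the transformed values
m_elog <- lmer(elog ~ TimeBin * Condition + (TimeBin | Subject),
               data = dat, weights = wts, REML = FALSE)
summary(m_elog)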

Thursday, October 6, 2016

New media and priorities

I was disappointed to read (a draft of) a forthcoming APS Observer article by Susan Fiske in which she complains about how new media have allowed "unmoderated attacks" on individuals and their research programs. Other bloggers have written at some length about this (Andrew Gelman, Chris Chambers, Uri Simonsohn); I particularly recommend the longer and very thoughtful post by Tal Yarkoni. A few points have emerged as the most salient to me:

First, scientific criticism should be evaluated on its accuracy and constructiveness. Our goal should be accurate critiques that provide constructive ideas about how to do better. Efforts to improve the peer review process often focus on those factors, along with timeliness. As it happens, blogs are actually great for this: posts can be written quickly and immediately followed by comments that allow for back-and-forth so that any inaccuracies can be corrected and constructive ideas can emerge. Providing critiques in a polite way is a nice goal, but it is secondary. (Tal Yarkoni's post discusses this issue very well).

Second, APS is the publisher of Psychological Science, a journal that was once prominent and prestigious, but has gradually become a pop psychology punchline. Perhaps I should not have been surprised that they're publishing an unmoderated attack on new media.

Third, things have changed very rapidly (this is the main point of Andrew Gelman's post). When I was in graduate school (2000-2005), I don't remember hearing concerns about replication, and standard operating procedures included lots of stuff that I would now consider "garden of forking paths"/"p-hacking". 2011 was a major turning point: Daryl Bem reported his evidence of ESP (side note: he had been working on that since at least the mid-to-late 90s, when I was an undergrad at Cornell and heard him speak about it). At the time, the flaws in that paper were not at all clear. That was also the year a paper called “False-positive psychology” was published (in Psychological Science), which showed that “researcher degrees of freedom” (or "p-hacking") make actual false positive rates much higher than the nominal p < 0.05 values. The year after that, in 2012, Greg Francis's paper ("Too good to be true") came out showing that multi-experiment papers reporting consistent replications of small effect sizes are themselves very unlikely and may reflect selection bias, p-hacking, or other problems. 2012 was also the year I was contacted by the Open Science Collaboration to contribute to their large-scale replication effort, which eventually led to a major report on the reproducibility of psychological research.

My point is that these issues, which are a huge deal now, were not very widely known even 5-6 years ago and almost nobody was talking about them 10 years ago. To put it another way, just about all tenured Psychology professors were trained before the term "p-hacking" even existed. So, maybe we should admit that all this rapid change can be a bit alarming and disorienting. But we're scientists, we're in the business of drawing conclusions from data, and the data clearly show that our old way of doing business has some flaws, so we should try to fix those flaws. Lots of good ideas are being implemented and tested -- transparency (sharing data and analysis code), post-publication peer review, new impact metrics for hiring/tenure/promotion that reward transparency and reproducibility. And many of those ideas came from those unmoderated new media discussions.

Thursday, September 15, 2016

Post-doctoral research position available

We are hiring a post-doctoral research fellow to start in 2017. Research in the lab focuses on spoken language processing and semantic memory in typical and atypical speakers. Current research projects investigate: (1) The processing and representation of semantic knowledge, particularly knowledge of object features and categories, and the events or situations in which they participate. (2) The organization of the spoken language system by mapping the relationships between stroke lesion location and behavioral deficits.

Research methods include:
  • behavioral and eye-tracking experiments
  • lesion-symptom mapping
  • computational modeling
  • non-invasive brain stimulation (tDCS)

Qualifications:
  • Doctoral degree in Psychology, Cognitive & Brain Science, CSD/SHLS, or a related discipline. Must be completed before starting the post-doctoral fellowship.
  • Experience with one or more of the research methods and/or content domains.
  • Programming experience in R, Matlab, Python, or a similar language is preferred.

The post-doctoral fellow will be expected to contribute to ongoing projects and to develop an independent line of research. Mentorship, training, and professional development opportunities will be provided to facilitate the fellow’s future career in academic, research, or industry settings.

LCDL has recently relocated to the Department of Psychology at the University of Alabama at Birmingham. UAB is a comprehensive, urban research university, ranked among the top 25 in funding from the NIH. Postdoctoral training at UAB is enhanced by the Office of Postdoctoral Education. The medical school is routinely ranked among the top in the US, and interdisciplinary programs are a particular strength, including the Psychology Department’s undergraduate and graduate neuroscience programs. Birmingham is a growing, diverse, and progressive city located in the foothills of the Appalachians. It was recently rated #1 Next Hot Food City by Zagat, it is home to several world-class museums and performing arts venues, and the region offers excellent sites for hiking, camping, boating, swimming, and fishing.

To apply, submit the following:
  • A letter of interest that describes your training, research experience and interests, and career goals
  • CV
  • 2-3 letters of recommendation

Applications will be considered until the position is filled. For full consideration please apply by November 1, 2016. Only complete applications will be considered. Questions and applications can be addressed to LCDL Director Dan Mirman.

Tuesday, March 1, 2016

MAPPD 2.0

About 5 or 6 years ago my colleagues at Moss Rehabilitation Research Institute and I made public a large set of behavioral data from language and cognitive tasks performed by people with aphasia. Our goal was to facilitate larger-scale research on spoken language processing and how it is impaired following left hemisphere stroke. We are pleased to announce that we have completed a thorough redesign of the Moss Aphasia Psycholinguistics Project Database (MAPPD) site. The MAPPD 2.0 interface is much simpler and easier to use, geared toward letting users download the data they want and analyze it themselves.

The core of this database is single-trial picture naming and word repetition data for over 300 participants (including 20 neurologically intact control participants) with detailed target word and response information. The database also contains basic demographic and clinical information for each participant with aphasia, as well as performance on a host of supplementary tests of speech perception, semantic cognition, short-term/working memory, and sentence comprehension. A more detailed description of the included tests, coding schemes, and usage suggestions is available in our original description of the database (Mirman et al., 2010) and in the site's documentation.
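
As a quick illustration of the download-and-analyze-yourself workflow, here is a hypothetical R sketch; the file name and column names are placeholders rather than the actual MAPPD export format, so check the site's documentation for the real coding scheme:

library(dplyr)

# hypothetical file and column names -- see the site's documentation for the
# actual export format and response coding scheme
naming <- read.csv("mappd_naming_export.csv")

# proportion of each naming response type for each participant
naming %>%
  group_by(ParticipantID, ResponseType) %>%
  summarise(n_trials = n(), .groups = "drop") %>%
  group_by(ParticipantID) %>%
  mutate(proportion = n_trials / sum(n_trials))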