A switching heuristic approximates Bayesian inference in a motion direction estimation task
GitHub: Steeve Laquitaine

Switching as a heuristic approximation to Bayesian inference

Contents

Abstract
Task
Data
Q&A

Abstract

Human perceptual inference has been fruitfully characterized as a normative Bayesian process in which sensory evidence and priors are multiplicatively combined to form posteriors from which sensory estimates can be optimally read out. We tested whether this basic Bayesian framework could explain human subjects' behavior in two estimation tasks in which we varied the strength of sensory evidence (motion coherence or contrast) and priors (set of directions or orientations). We found that despite excellent agreement of the mean and variability of estimates with a Basic Bayesian observer model, the estimate distributions were bimodal, with unpredicted modes near the prior and the likelihood. We developed a model that switched between prior and sensory evidence rather than integrating the two, which explained the data better than the Basic and several other Bayesian observers. Our data suggest that humans can approximate Bayesian optimality with a switching heuristic that forgoes multiplicative combination of priors and likelihoods.

Task

Brownian motion

Run the task

The MATLAB code is in projInference/task/.

  1. Clone the mgl and mrTools libraries from https://github.com/justingardner and SLcodes from https://github.com/steevelaquitaine/
  2. Set the screen parameters (mglEditScreenParams; screenNumber = 1, which can be changed in taskDotDir.m)
  3. Open main.m and run each prior block line by line (see the setup sketch below)
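
A minimal MATLAB setup sketch; the ~/Desktop locations are assumptions (adjust them to wherever you cloned the repositories):

% Setup sketch: the clone locations below are assumptions, adjust to your own layout
addpath(genpath('~/Desktop/mgl'));             % display library (Gardner lab)
addpath(genpath('~/Desktop/mrTools'));         % analysis library (Gardner lab)
addpath(genpath('~/Desktop/SLcodes'));         % SLcodes utilities
addpath(genpath('~/Desktop/projInference/'));  % this project (task code is in projInference/task/)

mglEditScreenParams                            % set screenNumber = 1 (or change it in taskDotDir.m)

edit main.m                                    % then run each prior block line by line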

Data

  1. Download "data01_direction4priors.zip" from Mendeley
  2. Add /projInference's data path to the MATLAB path (mine is ~/Desktop/projInference)
  3. Unzip the file into /projInference/data/
  4. Fit the data

To visualize the Standard Bayesian model's predictions (subject 1):

addpath(genpath('~/Desktop/projInference/'));
datapath = '~/Desktop/projInference/data/data01_direction4priors/data';
SLfitBayesianModel({'sub01'},[100 3 3 1 2.5 7.7 43 NaN 0.001 15 NaN],'dataPathVM',datapath,'experiment','vonMisesPrior','MAPReadout','modelPredictions','withData','inputFitParameters','filename','myfile');

Q&A

Why is the estimation standard deviation higher clockwise of the prior mean (at -140º) than counterclockwise (at +140º) for the 80º prior? Why are the models' fits not smooth (figs. 3 and 5)?

This is an artefact in the plotted standard deviation created by numerical imprecision in the polar-to-cartesian conversions performed for this plot. 90 degrees was converted to misestimated cartesian coordinates (x=-4e-8, y=1) instead of (x=0, y=1). Once converted back to polar with atan(y/x), the negative x led to -90 degrees instead of 90 degrees. A corrected plot will be uploaded soon. Similar numerical imprecisions make the curves not smooth.
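
A minimal MATLAB sketch of the artifact, using the x = -4e-8 value quoted above; a quadrant-aware arctangent such as atan2d is one way to avoid the sign flip:

% Polar -> cartesian -> polar round trip for a 90-degree direction
x = -4e-8;       % should be exactly 0, but came out slightly negative
y = 1;

atand(y / x)     % returns roughly -90: the sign of x flips the quadrant
atan2d(y, x)     % returns roughly +90: quadrant-aware arctangent avoids the artifact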

Could detection sometimes fail?

A Bayesian model in which detection sometimes fails would, on trials where detection failed, combine the prior with a uniform likelihood, producing an estimate peak centered at the prior mean; on trials where detection succeeded, it would combine the prior with a noiseless (delta function) or extremely strong likelihood, producing an estimate peak centered at the motion direction. Whether the likelihood function, represented by inherently noisy neural populations, can be noiseless, or nearly noiseless enough to entirely bias estimates toward the sensory evidence, remains to be determined.

We cannot preclude the possibility that this or other distributional forms would allow a Bayesian model with multiplicative integration to better explain the data.
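
For illustration, here is a minimal MATLAB simulation of such a detection-failure observer; the motion direction, prior mean, detection probability, and noise level are hypothetical values chosen only to show the resulting bimodal estimate distribution, not the paper's fitted model:

% Hypothetical detection-failure observer (illustration only)
nTrials   = 1e4;
motionDir = 180;       % true motion direction (deg), assumed for illustration
priorMean = 225;       % prior mean (deg), assumed for illustration
pDetect   = 0.6;       % probability that the stimulus is detected (assumed)
noiseSD   = 10;        % estimation/motor noise (deg, assumed)

detected  = rand(nTrials,1) < pDetect;
% detection fails    -> flat likelihood       -> posterior = prior -> estimate near the prior mean
% detection succeeds -> near-delta likelihood -> estimate near the motion direction
estimates = priorMean + noiseSD * randn(nTrials,1);
estimates(detected) = motionDir + noiseSD * randn(sum(detected),1);
estimates = mod(estimates, 360);

histogram(estimates, 0:5:360)   % bimodal: one mode near the prior mean, one near the motion direction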

Can we check the detection hypothesis by measuring likelihood strength in a 2AFC?

In a 2AFC task, subjects would, for example, indicate in repeated trials whether a test motion stimulus with 6% coherence moved clockwise or counterclockwise relative to a reference 100% coherence motion stimulus. In the simplest version of the task the prior is uniform and motion directions are sampled from a uniform distribution. The width of the sensory likelihood can then be fitted to subjects' responses. It is not clear, though, how or whether the width can be derived for each individual trial, which is necessary to determine whether the likelihood is sometimes flat (no detection) and sometimes strong (detection).
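
As a sketch of what fitting across trials (not per trial) could look like, here is a minimal MATLAB example that simulates 2AFC responses and recovers the likelihood width by maximum likelihood, assuming a cumulative Gaussian psychometric function with no bias; normcdf requires the Statistics Toolbox and all parameter values are hypothetical:

% Simulate 2AFC responses and recover the sensory noise (likelihood width)
trueSigma = 12;                                  % assumed sensory noise (deg)
offsets   = repmat((-30:5:30)', 50, 1);          % test minus reference direction (deg)
pCW       = normcdf(offsets, 0, trueSigma);      % probability of a "clockwise" response
respCW    = rand(size(offsets)) < pCW;           % simulated binary responses

% Maximum-likelihood fit of sigma (bias fixed at 0; log-sigma keeps the estimate positive)
clampP    = @(p) min(max(p, 1e-9), 1 - 1e-9);    % avoid log(0)
negLogLik = @(logSig) -sum(respCW    .* log(clampP(normcdf(offsets, 0, exp(logSig)))) + ...
                           (~respCW) .* log(1 - clampP(normcdf(offsets, 0, exp(logSig)))));
sigmaHat  = exp(fminsearch(negLogLik, log(5)))   % should recover roughly trueSigma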

Why is orientation estimation so hard at 15% contrast?

Possible reasons why orientation estimation was hard at 15% contrast are that the stimulus 1) was a thin bar, 2) was filled with filtered noise, 3) appeared for only 20 ms, and 4) could appear in one of 36 different spatial orientations. Thus contrast was only one of many noise factors.

Why is the switching model able to fit the SD so well, rather than predicting a higher SD as reported for cue switching in the cue-integration literature?

The most likely reason is that our switching model has more free parameters, which allows it to avoid that problem to some extent.

Did we run the task without feedback?

We did run pilot experiments without feedback, but that made the task much harder for subjects, particularly at low coherence (6%): pilot subjects' estimate distributions were mostly random. Some subjects also displayed unexplained biases. We also reasoned that the prior would be easier to learn, and thus learning would be faster (which was essential to test our hypothesis), if we used feedback, again particularly at low coherence.

Did subjects see the edges of the monitor?

No. Subjects' distance from the monitor prevented them from seeing its edges. We also ensured that subjects viewed the stimuli in a dark room with no visual distractors, and the fixation point was circular. These precautions ensured that subjects could not use cardinal axes (monitor edges, a fixation cross) as references to estimate the motion directions.

What were the subjects' instructions?

Subjects were instructed as follows: