Visual perception is typically performed in the context of a task or goal. Nonetheless, visual processing has traditionally been conceptualized in terms of a fixed, task-independent hierarchy of feature detectors. We explore the computational implications of allowing early visual processing to be task modulated. Using artificial neural networks, we show that significant improvements in task accuracy can be obtained by allowing the weights to be modulated by task. The primary benefits are obtained under resource-limited processing. A relatively modest task-based modulation of weights and activities can lead to a large performance boost, suggesting an efficient means of increasing effective cortical capacity.
Michael C. Mozer, Adrian Fan
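To make the idea of task-modulated weights concrete, here is a minimal sketch in NumPy. It assumes a simple multiplicative-gain parameterization, in which a shared weight matrix is scaled by a small per-task gain on each hidden unit; the paper's actual modulation scheme and network architecture may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_tasks = 8, 4, 3

# Shared, task-independent base weights (fixed feature detectors).
W = rng.normal(scale=0.5, size=(n_hidden, n_in))

# Small per-task gains on hidden units (hypothetical parameterization).
G = rng.normal(scale=0.1, size=(n_tasks, n_hidden))

def forward(x, task):
    """Hidden activations under modest task-based weight modulation."""
    gain = 1.0 + G[task]                 # gains stay near 1.0
    return np.tanh((W * gain[:, None]) @ x)

x = rng.normal(size=n_in)
h0 = forward(x, task=0)
h1 = forward(x, task=1)
```

Here the same input yields slightly different hidden representations per task, even though the base weights `W` are shared; only the small gain vectors are task-specific.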