Peter Skehan

Definition:

Peter Skehan is a British applied linguist and professor whose work bridges foreign language aptitude research and task-based language teaching through a cognitive-processing framework. He is best known for the Complexity-Accuracy-Fluency (CAF) triadic framework and the Trade-Off Hypothesis, which proposes that learners draw on a limited attentional pool and must therefore allocate resources among the competing demands of producing elaborate or restructured language (complexity), target-like language (accuracy), and smoothly delivered language (fluency) during L2 task performance, with task design determining which dimension receives attentional priority. His major works include A Cognitive Approach to Language Learning (1998) and extensive research on aptitude, task performance, and the conditions under which tasks promote restructuring of the L2 system versus practice of existing patterns.


In-Depth Explanation

The CAF Triadic Framework:

Skehan (1998) proposed that L2 oral production can be analyzed along three distinct dimensions:

  • Complexity: The elaborateness, variety, or sophistication of the language produced — use of subordination, novel vocabulary, risk-taking with new structures. Often operationalized as clause length, subordination index, or lexical diversity (type-token ratio).
  • Accuracy: Freedom from error — the degree to which the language produced conforms to target-language norms. Operationalized as percentage of error-free clauses, error-free AS-units.
  • Fluency: The speed and ease of production — how smoothly and quickly speech is produced without pauses, reformulations, or hesitations. Operationalized as speech rate, pause frequency, repair frequency.
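The operationalizations above can be made concrete with a small computational sketch. The sample clauses, error codings, and timing figures below are invented for illustration; in real CAF studies, segmentation into clauses or AS-units, error coding, and pause detection are done by trained raters and speech-analysis software, not by this kind of toy script.

```python
# Toy sketch of common CAF measures on a hand-annotated sample.
# Each clause is (tokens, is_error_free, is_subordinate) - all values
# here are hypothetical annotations, not real learner data.
clauses = [
    (["I", "think", "that", "he", "left"], True, False),
    (["that", "he", "left", "early"], True, True),
    (["because", "he", "were", "tired"], False, True),
]
speech_duration_sec = 12.0   # assumed timing taken from the audio
pause_count = 4              # pauses above some threshold duration

tokens = [t.lower() for toks, _, _ in clauses for t in toks]

# Complexity: mean clause length, subordination ratio, type-token ratio
mean_clause_length = len(tokens) / len(clauses)
subordination_ratio = sum(sub for _, _, sub in clauses) / len(clauses)
ttr = len(set(tokens)) / len(tokens)

# Accuracy: percentage of error-free clauses
pct_error_free = 100 * sum(ok for _, ok, _ in clauses) / len(clauses)

# Fluency: speech rate (words per minute) and pause frequency
speech_rate_wpm = len(tokens) / (speech_duration_sec / 60)
pauses_per_minute = pause_count / (speech_duration_sec / 60)

print(f"Complexity: {mean_clause_length:.2f} words/clause, "
      f"subordination {subordination_ratio:.2f}, TTR {ttr:.2f}")
print(f"Accuracy: {pct_error_free:.1f}% error-free clauses")
print(f"Fluency: {speech_rate_wpm:.1f} wpm, {pauses_per_minute:.1f} pauses/min")
```

Note that the type-token ratio is sensitive to sample length, which is one reason (as discussed under Measuring CAF below) that operationalizations differ across studies.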

The Trade-Off Hypothesis:

Skehan’s Trade-Off Hypothesis (1998, 2009) proposes that:

  • L2 learners have a limited pool of attentional resources for simultaneous demands.
  • Under task demands that push towards all three simultaneously (complex subject matter + accuracy requirements + time pressure), learners cannot excel at all three.
  • Trade-offs occur: if a task rewards complexity (e.g., argue a nuanced position), accuracy may suffer; if a task rewards accuracy (e.g., precise grammar-focused task), complexity may be sacrificed.
  • Task design can strategically prioritize one dimension: planning time tends to increase complexity and fluency; an anticipated post-task activity (e.g., public performance or transcription of one's output) increases accuracy; and other task characteristics each tend to favor a different dimension.

Two key debates in the field:

  1. Skehan vs. Robinson (Cognition Hypothesis):

Robinson (2001) made the opposite prediction for cognitive task complexity: under his Cognition Hypothesis, tasks made more complex along resource-directing dimensions (e.g., reasoning demands) push learners to engage more deeply with form, promoting complexity and accuracy together. The Skehan-Robinson debate is ongoing and empirically unresolved; studies support different predictions under different task conditions.

  2. Measuring CAF:

The operationalization of complexity, accuracy, and fluency varies substantially across studies (different measures for each dimension), making cross-study and meta-analytic comparison difficult. Norris and Ortega (2009) called for standardization of CAF measurement.

Foreign Language Aptitude:

Before his CAF work, Skehan made major contributions to aptitude research:

  • Carroll and Sapon (1959): The Modern Language Aptitude Test (MLAT) identified four components: phonetic coding ability, rote learning ability, inductive language learning ability, and grammatical sensitivity.
  • Skehan (1989, 1998): Updated aptitude framework arguing that aptitude components map onto stages of processing — phonemic coding ability relevant to input; grammatical sensitivity relevant to noticing patterns; memory to retention; inductive learning ability to rule formation.
  • Aptitude-Treatment Interaction (ATI): Different instructional methods may be differentially effective for learners with different aptitude profiles — explicit instruction benefits learners with high grammatical sensitivity; implicit methods benefit learners with high memory capacity.

The Dual-Mode System:

Skehan (1998) proposed a dual-mode model of L2 production:

  • Rule-based mode: Learners generate language through formulation of grammatical rules — slower, more accurate, more cognitively costly.
  • Memory/exemplar mode: Learners retrieve pre-formed chunks, collocations, and exemplars from memory — faster, less cognitively costly, approximates fluency.
  • Development requires both modes: a purely rule-based system lacks fluency; a purely memory-based system lacks flexibility and productive generativity.
  • Task design can push learners toward one mode or the other: time-pressured tasks push toward memory/chunk retrieval (increasing fluency); planning-enabled tasks free resources for rule-based generation (increasing complexity).

Japanese L2 context:

  • The Japanese scripts (hiragana, katakana, kanji) add a literacy component to language learning that creates aptitude-relevant demands beyond Carroll and Sapon's MLAT framework, which was designed with Indo-European L2 acquisition in mind.
  • CAF research has been extended to Japanese L2 writing — kanji selection, morphological choice (e.g., the te-form vs. dictionary form), and sentence-final predicate complexity measures in Japanese compositions.
  • Task difficulty in Japanese is heightened by orthographic complexity — writing tasks are significantly more cognitively demanding due to kanji retrieval; spoken tasks have a different load profile.

History

  • 1989: Individual Differences in Second Language Learning — aptitude components and individual differences.
  • 1998: A Cognitive Approach to Language Learning — CAF framework; Trade-Off Hypothesis; dual-mode system; central statement of Skehan’s theoretical model.
  • 2001–2010: Ongoing debate with Robinson over Cognition Hypothesis vs. Trade-Off Hypothesis; multiple empirical studies testing predictions of each.
  • 2009: Norris and Ortega call for CAF measurement standardization.
  • 2014: Skehan — “The Complexity-Accuracy-Fluency Trade-Off Hypothesis” — refined statement.

Common Misconceptions

“More complex tasks always produce better language.” Skehan’s whole point is that more cognitively demanding tasks may suppress accuracy and fluency even if they promote complexity — task design must specify which dimension is the pedagogical priority.

“CAF dimensions are correlated.” Skehan’s data consistently show they are not — a gain in complexity often comes at the cost of accuracy or fluency. This is precisely what motivated the Trade-Off Hypothesis.


Criticisms

  • The Trade-Off Hypothesis has not been universally supported empirically — some studies find gains in complexity and accuracy together, contradicting the trade-off prediction.
  • CAF measurement inconsistency across studies makes it difficult to accumulate a coherent evidence base.
  • The dual-mode model is a descriptive metaphor rather than a computationally specified cognitive model — its predictions are underspecified.

Social Media Sentiment

Skehan’s CAF framework is widely taught in TEFL/TESOL training programs. Teachers discuss CAF dimensions when designing tasks and evaluating learner output. The trade-off concept is intuitively resonant with teachers who observe that when learners take risks with complex language, errors increase — and that accuracy-focused learners produce safe, simple sentences. The Skehan-Robinson debate is less visible in practitioner communities but is central in graduate SLA courses.

Last updated: 2026-04


Practical Application

  • Use planning time strategically: Pre-task planning (especially strategic planning — thinking about what to say) tends to increase complexity and sometimes fluency. Assigned planning time is one of the most empirically supported task design variables.
  • Post-task performance pressure for accuracy: If accuracy is the target, make learners aware they will present or submit a revised version — post-task conditions push attention toward accuracy during task performance.
  • Match task complexity to learner proficiency: Tasks with excessive processing demands deplete resources across all CAF dimensions — match cognitive load to learner capacity to enable productive performance without overwhelming.
  • Sequence chunk-based and rule-based tasks: Alternating tasks that push chunk retrieval (fluency) with tasks that require novel construction (complexity) develops both modes in Skehan’s dual-mode framework.


Research

Skehan, P. (1998). A Cognitive Approach to Language Learning. Oxford University Press. [Summary: CAF triadic framework; Trade-Off Hypothesis; dual-mode system; aptitude components; task design and attentional resource allocation; foundational theory for task-based language teaching cognitive research.]

Carroll, J. B., & Sapon, S. (1959). Modern Language Aptitude Test. The Psychological Corporation. [Summary: MLAT; four aptitude components (phonemic coding, rote learning, inductive learning, grammatical sensitivity); first systematic psychometric measure of L2 aptitude; foundational for aptitude research Skehan built on.]

Robinson, P. (2001). Task complexity, task difficulty, and task production: Exploring interactions in a componential framework. Applied Linguistics, 22(1), 27–57. [Summary: Cognition Hypothesis; complex tasks proposed to promote both complexity and accuracy; direct contrast to Skehan’s Trade-Off prediction; central debate in task complexity research.]

Skehan, P. (2009). Modelling second language performance: Integrating complexity, accuracy, fluency, and lexis. Applied Linguistics, 30(4), 510–532. [Summary: Refined CAF model; CAFL (adding lexis); integration of linguistic and psycholinguistic evidence; response to Robinson debate; updated theoretical statement.]

Norris, J. M., & Ortega, L. (2009). Towards an organic approach to investigating CAF in instructed SLA: The case of complexity. Applied Linguistics, 30(4), 555–578. [Summary: CAF measurement critique; call for standardization; meta-analytic challenges from inconsistent operationalization; proposed framework for cumulative CAF research.]