Spaced Repetition

Definition:

Spaced repetition is a memory-optimization technique in which study items are reviewed at progressively expanding intervals — short at first, then days, weeks, and months apart — timed to coincide with the moment just before forgetting would otherwise occur. By capitalizing on the psychological spacing effect, spaced repetition systems (SRS) achieve dramatically greater long-term retention per hour of study compared to massed practice (cramming). In second language acquisition, the technique is most commonly implemented through flashcard software — particularly Anki and LingQ — and underpins the vocabulary acquisition methodology of major self-directed learning communities including AJATT and Refold. Sakubo implements spaced repetition as its core review engine, using the FSRS algorithm.


The Spacing Effect

The spacing effect is one of the most robustly replicated findings in cognitive psychology: distributing practice across time produces better retention than an equivalent amount of massed practice in a single session. Ebbinghaus (1885) first documented the forgetting curve — the exponential decay of memory over time — and noted that re-exposure before complete forgetting produced stronger and more durable traces than re-exposure of already-forgotten material.
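The exponential decay Ebbinghaus described can be made concrete with a short sketch (illustrative only — the stability values below are invented for demonstration, and modern schedulers use more elaborate curves):

```python
import math

def retention(t_days: float, stability: float) -> float:
    """Probability of recall after t_days in the simple exponential
    model R = exp(-t/S). S ('stability') is the time constant:
    larger S means slower forgetting."""
    return math.exp(-t_days / stability)

# A fresh memory (S = 2 days) vs. a well-consolidated one (S = 30 days)
# after one week without review:
weak = retention(7, 2)     # ~0.03
strong = retention(7, 30)  # ~0.79
```

The curve motivates the core SRS move: review just before R drops below a target threshold, which pushes S up and lets the next interval be longer.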

The mechanism is not fully settled, but two leading accounts exist:

  1. Study-Phase Retrieval: Each spaced practice trial requires partial retrieval of the target trace. The cognitive effort of this partial retrieval strengthens the trace more than passive re-exposure (Roediger & Karpicke, 2006). This is the same process underlying active recall.
  2. Encoding Variability: Spaced repetitions occur in slightly different mental contexts (different times of day, different emotional states, different adjacent information). This contextual variation creates a richer associative network around the memory, improving its retrievability under diverse future cues (Estes, 1955; Glenberg, 1979).

The Forgetting Curve and Optimal Interval

Ebbinghaus’s original model of forgetting assumed a universal exponential decay rate, but modern SRS theory recognizes that forgetting speed varies substantially by:

  • Item difficulty (harder items need shorter intervals)
  • Learner familiarity (items related to L1 knowledge decay more slowly via cross-linguistic transfer)
  • Retrieval success history (each successful retrieval extends the next optimal interval)

Early SRS algorithms such as SM-2 — developed by Piotr Wozniak for SuperMemo, a computerized descendant of Leitner box systems — used fixed multipliers to extend intervals after successful recall. The FSRS algorithm (Free Spaced Repetition Scheduler; Ye et al., 2022) represents the most current approach: a machine-learning-derived model that uses Retrievability, Stability, and Difficulty as the three primary variables governing interval scheduling. Sakubo uses FSRS rather than SM-2, giving it theoretically more efficient scheduling with a lower daily review burden for the same retention rate.
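The "fixed multipliers" can be made concrete with a sketch of SM-2's published update rule (simplified — real implementations such as Anki and SuperMemo differ in details like how failed reviews affect the ease factor):

```python
def sm2_review(quality: int, reps: int, interval: float, ease: float):
    """One review step of the SM-2 algorithm.
    quality: self-graded recall, 0 (blackout) to 5 (perfect).
    Returns updated (reps, interval_days, ease)."""
    if quality < 3:
        # Failed recall: restart the learning sequence.
        return 0, 1, ease
    if reps == 0:
        interval = 1
    elif reps == 1:
        interval = 6
    else:
        interval = round(interval * ease)  # the fixed-multiplier step
    # Ease factor update, floored at 1.3 per the SM-2 spec.
    ease = max(1.3, ease + (0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)))
    return reps + 1, interval, ease

# Three successful reviews of a new card at the default ease of 2.5:
state = (0, 0, 2.5)
for q in (4, 4, 5):
    state = sm2_review(q, *state)
# state → (3, 15, ...): intervals went 1 → 6 → round(6 × 2.5)
```

Note how the interval growth depends only on the ease multiplier, not on when the review actually happened — the rigidity that FSRS's per-card memory state is designed to remove.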

In Second Language Acquisition

The application of spaced repetition to SLA is most directly relevant to vocabulary acquisition. Nation (2001) argued that vocabulary learning requires multiple encounters with a word (typically 10–20 for initial consolidation of form-meaning mapping), and that the quality of each encounter depends on both meaningfulness and spacing. SRS directly addresses the spacing dimension.

In practice, learners use SRS in two primary modes:

1. Pre-made decks: Frequency-ranked vocabulary decks (e.g., Core 2000 for Japanese, or the Anki 20k deck for Spanish) where the learner works through the most frequent words of the target language in priority order. This is the recommended Stage 0–1 approach in Refold.

2. Sentence mining: The learner creates custom flashcards from sentences encountered in authentic target-language content. Each card contains a full sentence with audio, an image from the source media, and a definition of the target word. This approach, popularized by AJATT and formalized by Refold, is considered superior to pre-made decks for long-term acquisition because it encodes words in personally meaningful context — aligning with the elaboration and involvement load principles of Laufer & Hulstijn (2001). Sentence mining is directly relevant to vocabulary depth, not merely breadth.

Relationship to Incidental Acquisition

SRS is an intentional learning technique. Its relationship to incidental vocabulary acquisition is complementary rather than competitive: SRS efficiently consolidates the core high-frequency vocabulary needed to make extensive reading comprehensible, while extensive reading provides the rich contextual input that deepens knowledge of words that have been SRS-introduced. Nation & Webb (2011) recommend explicitly pairing both approaches: use SRS for the top 2,000 frequency words (which cover approximately 95% of spoken text), then transition to reading-driven incidental acquisition for the long tail.


History

1885 — Ebbinghaus’s forgetting curve: Hermann Ebbinghaus’s Über das Gedächtnis (Memory: A Contribution to Experimental Psychology) established the empirical basis for the spacing effect through systematic self-experimentation with nonsense syllable memorization. He documented both the forgetting curve and the “savings” produced by correctly timed re-exposure.

1932 — C. A. Mace suggests spacing as a study technique: Mace’s Psychology of Study was among the first popular texts to suggest students use spacing rather than cramming, predating SRS software by 50 years.

1972 — Leitner Card Box: Sebastian Leitner described (in his study guide So lernt man lernen!) a physical card-sorting system with multiple boxes representing different review frequencies. Cards answered correctly moved to longer-interval boxes; incorrect cards returned to the daily box. This was the first practical mechanical implementation of spaced repetition for language vocabulary.
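The box mechanics reduce to one promotion/demotion rule, sketched below (the doubling interval schedule is hypothetical — Leitner keyed review frequency to physical box size, and modern variants differ):

```python
def leitner_step(box: int, correct: bool, num_boxes: int = 5) -> int:
    """Move one card after a review in a Leitner box system:
    correct → promote one box (capped at the last box),
    incorrect → back to box 0 (reviewed every day)."""
    return min(box + 1, num_boxes - 1) if correct else 0

# An illustrative schedule: boxes 0..4 reviewed every 1, 2, 4, 8, 16 days.
intervals = [2 ** b for b in range(5)]  # [1, 2, 4, 8, 16]

box = 0
for correct in (True, True, False, True):
    box = leitner_step(box, correct)
# box → 1: promoted twice, demoted to 0 on the miss, promoted once
```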

1972 — Spacing effect named and consolidated: Glenberg and others formally documented the spacing effect as a reliable psychological phenomenon across diverse material types.

1987 — SuperMemo and SM-2: Piotr Wozniak, a Polish student of biology, created the first computer-based SRS software (SuperMemo) and published the SM-2 algorithm, which calculated mathematically optimal review intervals using a fixed ease factor multiplier. SM-2 became the basis for virtually all SRS software that followed.

2006 — Anki: Damien Elmes released Anki as a free, open-source, cross-platform SRS application based on SM-2. Its open architecture — particularly the ability to share custom decks — made SRS accessible to millions of language learners worldwide. Anki became the primary tool of AJATT practitioners and later Refold communities.

2006 — AJATT and sentence mining: AJATT’s emphasis on mining full sentences from native content — rather than word-translation pairs — dramatically shifted how SRS was applied to language learning, embedding vocabulary review in rich linguistic context.

2022 — FSRS: Ye and collaborators published the FSRS algorithm, applying modern machine learning to forgetting-curve modeling. Unlike SM-2’s fixed ease multiplier, FSRS tracks three state variables (Retrievability, Stability, Difficulty) and fits its interval formula to large-scale review data. Studies comparing FSRS to SM-2 show measurably improved retention with fewer reviews. Sakubo adopted FSRS as its review engine.
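A sketch of how two of these variables interact, using the power-law retrievability curve published for FSRS v4 (treat the exact constants as belonging to that version — later releases retune the curve's shape):

```python
def retrievability(t_days: float, stability: float) -> float:
    """FSRS v4 retrievability: a power-law forgetting curve,
    parameterized so that R = 0.9 exactly when t equals the
    card's stability S."""
    return (1 + t_days / (9 * stability)) ** -1

def next_interval(stability: float, desired_retention: float = 0.9) -> float:
    """Invert the curve: days until retrievability drops to the
    desired retention level."""
    return 9 * stability * (1 / desired_retention - 1)

# By construction, scheduling at 90% retention means the interval
# equals the card's stability:
next_interval(20.0, 0.9)  # → 20.0 (within float error)
```

The Difficulty variable (not shown) modulates how much Stability grows after each review; lowering the desired retention lengthens every interval, trading a few more lapses for a smaller daily review load.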


Common Misconceptions

“SRS replaces reading and listening.”

SRS is a consolidation tool, not a primary input channel. Krashen (1985) and subsequent input-based researchers are clear that acquisition requires comprehensible input in context; SRS can accelerate retention of vocabulary forms already encountered but cannot substitute for the communicative context needed for deep form-meaning mapping.

“Bigger decks are better.”

Learners who add hundreds of new cards per day rapidly build unsustainable review backlogs. The consensus in AJATT and Refold communities is 10–20 new cards per day as a sustainable pace. Kornell & Bjork (2008) found that spacing effects are strongest when the learner is not overwhelmed; difficulty and cognitive load during review promote retention only up to a threshold.

“SRS works the same for all card types.”

Research distinguishes between simple recognition (seeing L2, producing L1 meaning) and production (seeing L1, producing L2). Both are valid, but they train different aspects: recognition → reading and listening fluency; production → writing and speaking fluency. Most SRS vocab learning is recognition-oriented, which is appropriate for input-focused methods.

“Anki and SRS are the same thing.”

Anki is an implementation; SRS is the underlying principle. Other implementations include LingQ’s word status system, Sakubo’s review engine, Bunpro (grammar SRS for Japanese), and the physical Leitner box.


Criticisms

  1. Gamification trap. Heavy SRS users sometimes report optimizing Anki metrics (review streak, due-card count) rather than language use. This is consistent with Dörnyei’s (2001) observation that instrumental motivation, when decoupled from communicative purpose, may sustain effort without producing communicative competence.
  2. Decontextualized review. Even sentence mining — the richest SRS format — strips sentences from their original narrative and audio context. Reviews are performed on isolated stimuli, which may not activate the same retrieval pathways as encountering words in fluent reading. This is an argument for supplementing SRS with extensive reading rather than replacing reading with SRS.
  3. Matthew effect risk. SRS efficiently consolidates vocabulary for learners who already have some foundation. But for learners with very low proficiency, even sentence-level cards may present too many unknown elements simultaneously, reducing the i+1 precision needed for effective acquisition (Krashen, 1985).
  4. Algorithm limitations. SM-2 and even FSRS have known failure modes: items with visually similar forms, items tied to episodic memories that fade, and multi-word collocations that require discourse-level context for full consolidation. These are active research areas in the SRS development community.

Social Media Sentiment

Spaced repetition occupies the most unambiguously positive position of any single technique in online language learning communities. Across r/languagelearning, r/LearnJapanese, r/Anki, and Discord servers for AJATT, Refold, and LingQ, SRS is treated as an empirically settled technique — the equivalent of compound interest for vocabulary.

The most common debate is not whether to use SRS but how: pre-made decks vs. sentence mining, recognition vs. production cards, optimal daily new-card rate, whether to mine from anime/manga/novels, and when to stop adding new cards. The FSRS vs. SM-2 discussion is ongoing in technically oriented communities, with FSRS advocates citing published retention data.

The most frequent criticism in social media is not about SRS effectiveness but about Anki usability — particularly its aging UI and steep learning curve for new users. Tools like Sakubo that implement FSRS with a more accessible interface are increasingly discussed as practical alternatives.

Last updated: 2026-04


Practical Application

  1. Start with a frequency deck for your target language (Japanese: Core 2000 or Kaishi 1K; Spanish: Anki 5k Frequency deck) to build the 1,000–2,000 high-frequency word foundation.
  2. Add 10–20 new cards per day. Sustainable pace > large batch. Review every day, even briefly.
  3. Transition to sentence mining once you’re consuming authentic content. Pull sentences that contain exactly one unknown word.
  4. Include audio and images. Multi-sensory encoding produces more robust retrieval pathways than text alone.
  5. Review on time. SRS only works if you complete your daily reviews. Skipping days rapidly compounds the backlog.
  6. For Japanese learners: use Sakubo for sentence-focused SRS with the FSRS algorithm — designed specifically for Japanese learners who want the benefits of sentence mining without the manual Anki deck configuration.
  7. Don’t equate review completion with learning. Your SRS session is a supplement to immersion, not a replacement. Pair with extensive reading and listening for maximum vocabulary depth.

See Also

  • LingQ — reading-first platform that implements spaced repetition through its word-status system
  • Extensive Reading — the natural complement to SRS-based vocab study
  • i+1 — the input difficulty principle that governs which sentences are worth mining
  • Sakubo

Research

  • Ebbinghaus, H. (1885). Über das Gedächtnis: Untersuchungen zur experimentellen Psychologie. Duncker & Humblot. [Trans. Ruger & Bussenius, 1913.] [Summary: The founding empirical study of memory. Documents the forgetting curve and the spacing effect through self-experimentation with nonsense syllables — the theoretical bedrock of all SRS methodology.]
  • Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132(3), 354–380. [Summary: Meta-analysis of 254 studies confirming the spacing effect across diverse material types and learner ages, with practical guidelines for optimal study-to-test interval ratios.]
  • Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249–255. [Summary: Demonstrates that retrieval practice (testing oneself) produces greater retention than re-studying — the “testing effect” that underpins active recall and SRS review mechanics.]
  • Nation, I. S. P. (2001). Learning Vocabulary in Another Language. Cambridge University Press. [Summary: Comprehensive framework for vocabulary acquisition — advocates pairing intentional learning (SRS) with meaning-focused input (extensive reading) and argues for frequency-ranking vocabulary priorities.]
  • Nation, P., & Webb, S. (2011). Researching and Analyzing Vocabulary. Heinle. [Summary: Extends Nation’s vocabulary framework with research methodology, including studies on incidental acquisition rates and the complementarity of SRS with reading-driven acquisition.]
  • Laufer, B., & Hulstijn, J. H. (2001). Incidental vocabulary acquisition in a second language: The construct of task-induced involvement. Applied Linguistics, 22(1), 1–26. [Summary: Proposes the Involvement Load Hypothesis — cards that require need, search, and evaluation produce deepest learning, supporting richly contextual sentence cards over simple word-translation pairs.]
  • Kornell, N., & Bjork, R. A. (2008). Learning concepts and categories: Is spacing the “enemy of induction”? Psychological Science, 19(6), 585–592. [Summary: Examines when spacing helps vs. potentially hurts; finds that for vocabulary and factual items (as opposed to abstract concept induction), spacing consistently aids retention.]
  • Wozniak, P. A. (1990). Optimization of learning. Master’s thesis, University of Technology, Poznan. [Summary: The original theoretical statement of SM-2, the algorithm underlying SuperMemo and early Anki scheduling. Establishes the mathematical framework for optimal interval calculation.]
  • Ye, W., Zhang, Q., & Liu, J. (2022). A new algorithm for memory scheduling: Free Spaced Repetition Scheduler (FSRS). arXiv preprint. [Summary: Describes FSRS, its three-variable state model (Retrievability, Stability, Difficulty), and comparison studies showing improved retention efficiency vs. SM-2.]
  • Krashen, S. D. (1985). The Input Hypothesis: Issues and Implications. Longman. [Summary: While not specifically about SRS, Krashen’s framework sets the context in which SRS vocabulary learning sits — as a supplement to comprehensible input, not a replacement for it.]