The Acquisition of Memories and
the Working-Memory System
Acquisition, Storage, and Retrieval How does new information-whether it's a friend's phone number or a fact you hope to memorize for the bio exam-become established in memory? Are there ways to learn that are particularly effective? Then, once information is in storage, how do you locate it and "reactivate" it later? And why does search through memory sometimes fail-so that, for example, you forget the name of that great restaurant downtown (but then remember the name when you're midway through a mediocre dinner someplace else)?
In tackling these questions, there's a logical way to organize our inquiry. Before there can be a memory, you need to gain, or "acquire," some new information. Therefore, acquisition-the process of gaining information and placing it into memory-should be our first topic. Then, once you've acquired this information, you need to hold it in memory until the information is needed. We refer to this as the storage phase. Finally, you remember. In other words, you somehow locate the information in the vast warehouse that is memory and you bring it into active use; this is called retrieval. This organization seems logical; it fits, for example, with the way most "electronic memories" (e.g., computers) work. Information ("input") is provided to a computer (the acquisition phase). The information then resides in some dormant form, generally on the hard drive or perhaps in the cloud (the storage phase). Finally, the information can be brought back from this dormant form, often via a search process that hunts through the disk (the retrieval phase). And there's nothing special about the computer comparison here; "low-tech" information storage works the same way. Think about a file drawer-information is acquired (i.e., filed), rests in this or that folder, and then is retrieved.
Guided by this framework, we'll begin our inquiry by focusing on the acquisition of new memories, leaving discussion of storage and retrieval for later. As it turns out, though, we'll soon find reasons for challenging this overall approach to memory. In discussing acquisition, for example, we might wish to ask: What is good learning? What guarantees that material is firmly recorded in memory? As we'll see, evidence indicates that what counts as "good learning" depends on how the memory is to be used later on, so that good preparation for one kind of use may be poor preparation for a different kind of use. Claims about acquisition, therefore, must be interwoven with claims about retrieval. These interconnections between acquisition and retrieval will be the central theme of Chapter 7. In the same way, we can't separate claims about memory acquisition from claims about memory storage. This is because how you learn (acquisition) depends on what you already know (information in storage). We'll explore this important relationship in both this chapter and Chapter 8.
We begin, though, in this chapter, by describing the acquisition process. Our approach will be roughly historical. We'll start with a simple model, emphasizing data collected largely in the 1970s. We'll then use this as the framework for examining more recent research, adding refinements to the model as we proceed. Demonstration 6.1: Primacy and Recency Effects The text describes a theoretical model in which working memory and long-term memory are distinct from each other, each governed by its own principles. But what's the evidence for this distinction? Much of the evidence comes from an easily demonstrated data pattern.
Read the following list of 25 words out loud, at a speed of roughly one second per word. (Before you begin, you might start tapping your foot at roughly one tap per second, and then keep tapping your foot as you read the list; that will help you keep up the right rhythm.)
Now, close the list so you can't see it anymore, and write down as many words from the list as you can remember, in any order.
Open the list, and compare your recall with the actual list. How many words did you remember? Which words did you remember?
· Chances are good that you remembered the first three or four words on the list. Did you? The textbook chapter explains why this is likely.
· Chances are also good that you remembered the final three or four words on the list. Did you? Again, the textbook chapter explains why this is likely.
· Even though you were free to write down the list in any order you chose, it's very likely that you started out by writing the words you'd just read-that is, the first words you wrote were probably the last words you read on the list. Is that correct?
The chapter doesn't explain this last point, but the reason is straightforward. At the end of the list, the last few words you'd read were still in your working memory, simply because you'd just been thinking about these words, and nothing else had come along yet to bump these items out of working memory. The minute you think about something else, though, that "something else" will occupy working memory and will displace these just-heard words. With that base, imagine what would happen if, at the very start of your recall, you tried to remember, say, the first words on the list. This effort will likely bring those words into your thoughts, and so now these words are in working memory-bumping out the words that were there and potentially causing you to lose track of those now-displaced words. To avoid this problem, you probably started your recall by "dumping" your working memory's current contents (the last few words you read) onto the recall sheet. Then, with the words preserved in this way, it didn't matter if you displaced them from working memory, and you were freed to go to work on the other words from the list.
· Finally, it's likely that one or two of the words on the list really "stuck" in your memory, even though the words were neither early in the list (and so didn't benefit from primacy) nor late on the list (and so didn't benefit from recency). Which words (if any) stuck in your memory in this way? Why do you think this is? Does this fit with the theory in the text?
The Route into Memory For many years, theorizing in cognitive psychology focused on the process through which information was perceived and then moved into memory storage-that is, on the process of information acquisition. One early proposal was offered by Waugh and Norman (1965). Later refinements were added by Atkinson and Shiffrin (1968), and their version of the proposal came to be known as the modal model. Figure 6.1 provides a simplified depiction of this model. Updating the Modal Model
According to the modal model, when information first arrives, it is stored briefly in sensory memory. This form of memory holds on to the input in "raw" sensory form-an iconic memory for visual inputs and an echoic memory for auditory inputs. A process of selection and interpretation then moves the information into short-term memory-the place where you hold information while you're working on it. Some of the information is then transferred into long-term memory, a much larger and more permanent storage place. This proposal captures some important truths, but it needs to be updated in several ways. First, the idea of "sensory memory" plays a much smaller role in modern theorizing, so modern discussions of perception (like our discussion in Chapters 2 and 3) often make no mention of this memory. (For a recent assessment of visual sensory memory, though, see Cappiello & Zhang, 2016.) Second, modern proposals use the term working memory rather than "short-term memory," to emphasize the function of this memory. Ideas or thoughts in this memory are currently activated, currently being thought about, and so they're the ideas you're currently working on. Long-term memory (LTM), in contrast, is the vast repository that contains all of your knowledge and all of your beliefs-most of which you aren't thinking about (i.e., they're ideas you aren't working on) at this moment.
The modal model also needs updating in another way. Pictures like the one in Figure 6.1 suggest that working memory is a storage place, sometimes described as the "loading dock" just outside of the long-term memory "warehouse." The idea is that information has to "pass through" working memory on the way into longer-term storage. Likewise, the picture implies that memory retrieval involves the "movement" of information out of storage and back into working memory.
In contrast, contemporary theorists don't think of working memory as a "place" at all. Instead, working memory is (as we will see) simply the name we give to a status. Therefore, when we say that ideas are "in working memory," we simply mean that these ideas are currently activated and being worked on by a specific set of operations.
We’ll have more to say about this modern perspective before we're through. It's important to emphasize, though, that contemporary thinking also preserves some key ideas from the modal model, including its claims about how working memory and long-term memory differ from each other. Let's identify those differences. First, working memory is limited in size; long-term memory is enormous. In fact, long-term memory has to be enormous, because it contains all of your knowledge-including specific knowledge (e.g., how many siblings you have) and more general themes (e.g., that water is wet, that Dublin is in Ireland, that unicorns don't exist). Long-term memory also contains all of your "episodic" knowledge-that is, your knowledge about events, including events early in your life as well as more recent experiences.
Second, getting information into working memory is easy. If you think about a particular idea or some other content, then you're "working on" that idea or content, and so this information-by definition-is now in your working memory. In contrast, we'll see later in the chapter that getting information into long-term memory often involves some work. Third, getting information out of working memory is also easy. Since (by definition) this memory holds the ideas you're thinking about right now, the information is already available to you. Finding information in long-term memory, in contrast, can sometimes be difficult and slow-and in some settings can fail completely.
Fourth, the contents of working memory are quite fragile. Working memory, we emphasize, contains the ideas you're thinking about right now. If your thoughts shift to a new topic, therefore, the new ideas will enter working memory, pushing out what was there a moment ago. Long-term memory, in contrast, isn't linked to your current thoughts, so it's much less fragile-information remains in storage whether you're thinking about it right now or not.
We can make all these claims more concrete by looking at some classic research findings. These findings come from a task that's quite artificial (i.e., not the sort of memorizing you do every day) but also quite informative. Working Memory and Long-Term Memory: One Memory or Two? In many studies, researchers have asked participants to listen to a series of words, such as "bicycle, artichoke, radio, chair, palace." In a typical experiment, the list might contain 30 words and be presented at a rate of one word per second. Immediately after the last word is read, the participants must repeat back as many words as they can. They are free to report the words in any order they choose, which is why this task is called a free recall procedure. People usually remember 12 to 15 words in this test, in a consistent pattern. They're very likely to remember the first few words on the list, something known as the primacy effect, and they're also likely to remember the last few words on the list, a recency effect. The resulting pattern is a U-shaped curve describing the relation between positions within the series-or serial position-and the likelihood of recall (see Figure 6.2; Baddeley & Hitch, 1977; Deese & Kaufman, 1957; Glanzer & Cunitz, 1966; Murdock, 1962; Postman & Phillips, 1965).
Explaining the Recency Effect What produces this pattern? We've already said that working memory contains the material someone is working on at just that moment. In other words, this memory contains whatever the person is currently thinking about; and during the list presentation, the participants are thinking about the words they're hearing. Therefore, it's these words that are in working memory. This memory, however, is limited in size, capable of holding only five or six words. Consequently, as participants try to keep up with the list presentation, they'll be placing the words just heard into working memory, and this action will bump the previous words out of working memory. As a result, as participants proceed through the list, their working memories will, at each moment, contain only the half dozen words that arrived most recently. Any words that arrived earlier than these will have been pushed out by later arrivals.
Of course, the last few words on the list don't get bumped out of working memory, because no further input arrives to displace them. Therefore, when the list presentation ends, those last few words stay in place. Moreover, our hypothesis is that materials in working memory are readily available-easily and quickly retrieved. When the time comes for recall, then, working memory's contents (the list's last few words) are accurately and completely recalled.
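This displacement account can be sketched in a few lines of code. The sketch below is purely illustrative (the six-item capacity and the generic word labels are assumptions for the demo, not data from the chapter): working memory behaves like a fixed-size buffer in which each newly arriving word bumps out the oldest one, so only the most recent items remain when the list ends.

```python
from collections import deque

def present_list(words, capacity=6):
    """Simulate working memory as a buffer holding only the most recent items."""
    buffer = deque(maxlen=capacity)  # the oldest item is bumped out automatically
    for word in words:
        buffer.append(word)          # each new word displaces the earliest arrival
    return list(buffer)

# A 30-word list, presented one word at a time.
words = [f"word{i}" for i in range(1, 31)]
print(present_list(words))           # only the last six words remain available
```

Because the buffer's contents at list's end are exactly the final few items, an immediate recall test retrieves them easily, which is the recency effect described above.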
The key idea, then, is that the list's last few words are still in working memory when the list ends (because nothing has arrived to push out these items), and we know that working memory's contents are easy to retrieve. This is the source of the recency effect. Explaining the Primacy Effect The primacy effect has a different source. We've suggested that it takes some work to get information into long-term memory (LTM), and it seems likely that this work requires some time and attention. So let's examine how participants allocate their attention to the list items. As participants hear the list, they do their best to be good memorizers, and so when they hear the first word, they repeat it over and over to themselves ("bicycle, bicycle, bicycle")-a process known as memory rehearsal. When the second word arrives, they rehearse it, too ("bicycle, artichoke, bicycle, artichoke"). Likewise for the third ("bicycle, artichoke, radio, bicycle, artichoke, radio"), and so on through the list. Note, though, that the first few items on the list are privileged. For a brief moment, "bicycle" is the only word participants have to worry about, so it has 100% of their attention; no other word receives this privilege. When "artichoke" arrives a moment later, participants divide their attention between the first two words, so "artichoke" gets only 50% of their attention-less than "bicycle" got, but still a large share of the participants' efforts. When "radio" arrives, it has to compete with "bicycle" and "artichoke" for the participants' time, and so it receives only 33% of their attention. Words arriving later in the list receive even less attention. Once six or seven words have been presented, the participants need to divide their attention among all these words, which means that each one receives only a small fraction of the participants' focus. 
As a result, words later in the list are rehearsed fewer times than words early in the list-a fact that can be confirmed simply by asking participants to rehearse out loud (Rundus, 1971).
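The attention arithmetic just described (100% for the first word, 50% each for the first two, 33% each for the first three, and so on) can be tallied directly. The following sketch is an assumption-laden simplification (it splits attention evenly among all words heard so far, ignoring any capacity limit), but it shows why early items accumulate the largest total share of rehearsal:

```python
def rehearsal_shares(n_words):
    """Total attention accumulated by each list position, splitting attention
    evenly among all words heard so far at each one-second step."""
    shares = [0.0] * n_words
    for step in range(1, n_words + 1):   # word number `step` has just arrived
        for i in range(step):            # attention divided among words 1..step
            shares[i] += 1.0 / step
    return shares

shares = rehearsal_shares(10)
print([round(s, 2) for s in shares])     # totals decline with serial position
```

The first word's total share strictly exceeds the second's, which exceeds the third's, mirroring the rehearsal counts Rundus (1971) observed.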
This view of things leads immediately to our explanation of the primacy effect-that is, the observed memory advantage for the early list items. These early words didn't have to share attention with other words (because the other words hadn't arrived yet), so more time and more rehearsal were devoted to them than to any others. This means that the early words have a greater chance of being transferred into LTM-and so a greater chance of being recalled after a delay. That's what shows up in these classic data as the primacy effect. Testing Claims about Primacy and Recency This account of the serial-position curve leads to many predictions. First, we're claiming that the recency portion of the curve is coming from working memory, while other items on the list are being recalled from LTM. Therefore, manipulations of working memory should affect recall of the recency items but not items earlier in the list. To see how this works, consider a modification of our procedure. In the standard setup, we allow participants to recite what they remember immediately after the list's end. But instead, we can delay recall by asking participants to perform some other task before they report the list items-for example, we can ask them to count backward by threes, starting from 201. They do this for just 30 seconds, and then they try to recall the list.
We've hypothesized that at the end of the list working memory still contains the last few items heard from the list. But the task of counting backward will itself require working memory (e.g., to keep track of where you are in the counting sequence). Therefore, this chore will displace working memory's current contents; that is, it will bump the last few list items out of working memory. As a result, these items won't benefit from the swift and easy retrieval that working memory allows, and, of course, that retrieval was the presumed source of the recency effect. On this basis, the simple chore of counting backward, even if only for a few seconds, will eliminate the recency effect. In contrast, the counting backward should have no impact on recall of the items earlier in the list: These items are (by hypothesis) being recalled from long-term memory, not working memory, and there's no reason to think the counting task will interfere with LTM. (That's because LTM, unlike working memory, isn't dependent on current activity.) Figure 6.3 shows that these predictions are correct. An activity interpolated, or inserted, between the list and recall essentially eliminates the recency effect, but it has no influence elsewhere in the list (Baddeley & Hitch, 1977; Glanzer & Cunitz, 1966; Postman & Phillips, 1965). In contrast, merely delaying the recall for a few seconds after the list's end, with no interpolated activity, has no impact. In this case, participants can continue rehearsing the last few items during the delay and so can maintain them in working memory. With no new materials coming in, nothing pushes the recency items out of working memory, and so, even with a delay, a normal recency effect is observed.
We'd expect a different outcome, though, if we manipulate long-term memory rather than working memory. In this case, the manipulation should affect all performance except for recency (which, again, is dependent on working memory, not LTM). For example, what happens if we slow down the presentation of the list? Now, participants will have more time to spend on all of the list items, increasing the likelihood of transfer into more permanent storage. This should improve recall for all items coming from LTM. Working memory, in contrast, is limited by its size, not by ease of entry or ease of access. Therefore, the slower list presentation should have no influence on working-memory performance. Research results confirm these claims: Slowing the list presentation improves retention of all the pre-recency items but does not improve the recency effect (see Figure 6.4).
Other variables that influence long-term memory have similar effects. Using more familiar or more common words, for example, would be expected to ease entry into long-term memory and does improve pre-recency retention, but it has no effect on recency (Sumby, 1963).
It seems, therefore, that the recency and pre-recency portions of the curve are influenced by distinct sets of factors and obey different principles. Apparently, then, these two portions of the curve are the products of different mechanisms, just as our theory proposed. In addition, fMRI scans suggest that memory for early items on a list depends on brain areas (in and around the hippocampus) that are associated with long-term memory; memory for later items on the list does not show this pattern (Talmi, Grady, Goshen-Gottstein, & Moscovitch, 2005; also Eichenbaum, 2017; see Figure 6.5). This provides further confirmation for our memory model.
A Closer Look at Working Memory Earlier, we counted four fundamental differences between working memory and LTM-the size of these two stores, the ease of entry, the ease of retrieval, and the fact that working memory is dependent on current activity (and therefore fragile) while LTM is not. These are all points proposed by the modal model and preserved in current thinking. As we've said, though, investigators' understanding of working memory has developed over the years. Let's examine the newer conception in more detail. The Function of Working Memory Virtually all mental activities require the coordination of several pieces of information. Sometimes the relevant bits come into view one by one, so that you need to hold on to the early-arrivers until the rest of the information is available, and only then weave all the bits together. Alternatively, sometimes the relevant bits are all in view at the same time-but you still need to hold on to them together, so that you can think about the relations and combinations. In either case, you'll end up with multiple ideas in your thoughts, all activated simultaneously, and thus several bits of information in the status we describe as "in working memory." (For more on how you manage to focus on these various bits, see Oberauer & Hein, 2012.) Framing things in this way makes it clear how important working memory is: You use it whenever you have multiple ideas in your mind, multiple elements that you're trying to combine or compare. Let's now add that people differ in the "holding capacity" of their working memories. Some people are able to hold on to (and work with) more elements, and some with fewer. How does this matter? To find out, we first need a way of measuring working memory's capacity, to determine if your memory capacity is above average, below, or somewhere in between.
The procedure for obtaining this measurement, however, has changed over the years; looking at this change will help clarify what working memory is, and what working memory is for. Digit Span For many years, the holding capacity of working memory was measured with a digit-span task. In this task, research participants hear a series of digits read to them (e.g., "8, 3, 4") and must immediately repeat them back. If they do so successfully, they're given a slightly longer list (e.g., "9, 2, 4, 0"). If they can repeat this one without error, they're given a still longer list ("3, 1, 2, 8, 5"), and so on. The procedure continues until the participant starts to make errors-something that usually happens when the list contains more than seven or eight items. The number of digits the person can echo back without errors is referred to as that person's digit span.
Procedures such as this imply that working memory's capacity is typically around seven items-at least five and probably not more than nine. These estimates have traditionally been summarized by the statement that this memory holds "7 plus-or-minus 2" items (Chi, 1976; Dempster, 1981; Miller, 1956; Watkins, 1977).
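The staircase logic of the digit-span task (lengthen the list until the first error, then report the longest list echoed correctly) can be sketched as a short simulation. The "participant" below is a stand-in function with an assumed fixed capacity of 7 items; both the stand-in and the starting list length are assumptions for the demo:

```python
import random

def simulated_participant(digits, capacity=7):
    """Echo the list back correctly only if it fits within capacity."""
    return list(digits) if len(digits) <= capacity else []

def measure_digit_span(participant):
    """Present ever-longer digit lists until the participant first errs."""
    length = 3                                # start with a short list
    while True:
        digits = [random.randint(0, 9) for _ in range(length)]
        if participant(digits) != digits:     # first error ends the procedure
            return length - 1                 # span = longest list echoed correctly
        length += 1

print(measure_digit_span(simulated_participant))  # → 7 for this stand-in
```

Real participants, of course, fail probabilistically rather than at a sharp cutoff, which is why the measured span is summarized as "7 plus-or-minus 2" rather than a single fixed number.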
However, we immediately need a refinement of these measurements. If working memory can hold 7 plus-or-minus 2 items, what exactly is an "item"? Can people remember seven sentences as easily as seven words? Seven letters as easily as seven equations? In a classic paper, George Miller (one of the founders of the field of cognitive psychology) proposed that working memory holds 7 plus-or-minus 2 chunks (Miller, 1956). The term "chunk" doesn't sound scientific or technical, and that's useful because this informal terminology reminds us that a chunk doesn't hold a fixed quantity of information. Instead, Miller proposed, working memory holds 7 plus-or-minus 2 packages, and what those packages contain is largely up to the individual person. The flexibility in how people "chunk" input can easily be seen in the span test. Imagine that we test someone's "letter span" rather than their "digit span," using the procedure already described. So the person might hear "R, L" and have to repeat this sequence back, and then "F, C, H," and so on. Eventually, let's imagine that the person hears a much longer list, perhaps one starting "H, O, P, T, R, A, S, L, U . . ." If the person thinks of these as individual letters, she'll only remember 7 of them, more or less. But she might reorganize the list into "chunks" and, in particular, think of the letters as forming syllables ("HOP, TRA, SLU, . . ."). In this case, she'll still remember 7 plus-or-minus 2 items, but the items are syllables, and by remembering the syllables she'll be able to report back at least a dozen letters and probably more.
How far can this process be extended? Chase and Ericsson (1982; Ericsson, 2003) studied a remarkable individual who happens to be a fan of track events. When he hears numbers, he thinks of them as finishing times for races. The sequence "3, 4, 9, 2," for example, becomes "3 minutes and 49.2 seconds, near world-record mile time." In this way, four digits become one chunk of information. This person can then retain 7 finishing times (7 chunks) in memory, and this can involve 20 or 30 digits! Better still, these chunks can be grouped into larger chunks, and these into even larger chunks. For example, finishing times for individual racers can be chunked together into heats within a track meet, so that, now, 4 or 5 finishing times (more than a dozen digits) become one chunk. With strategies like this and a lot of practice, this person has increased his apparent memory span from the "normal" 7 digits to 79 digits. However, let's be clear that what has changed through practice is merely this person's chunking strategy, not the capacity of working memory itself. This is evident in the fact that when tested with sequences of letters, rather than numbers, so that he can't use his chunking strategy, this individual's memory span is a normal size-just 6 consonants. Thus, the 7-chunk limit is still in place for this man, even though (with numbers) he's able to make extraordinary use of these 7 slots.
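The payoff of chunking is easy to demonstrate numerically. In the sketch below (an illustration only: the letter stream, the three-letters-per-chunk grouping rule, and the hard 7-chunk cap are all assumptions for the demo), the same 7-chunk limit recovers three times as many letters once the letters are packaged into syllable-sized chunks:

```python
def chunk(letters, size):
    """Group a stream of letters into fixed-size chunks (e.g., syllables)."""
    return ["".join(letters[i:i + size]) for i in range(0, len(letters), size)]

def recallable(items, chunk_limit=7):
    """Assume working memory keeps only the first `chunk_limit` chunks;
    return how many letters those chunks contain in total."""
    kept = items[:chunk_limit]
    return sum(len(item) for item in kept)

letters = list("HOPTRASLUMINBEDCOGFLY")       # a 21-letter stream
print(recallable(letters))                    # one letter per chunk: 7 letters
print(recallable(chunk(letters, 3)))          # three letters per chunk: 21 letters
```

The limit on the number of chunks never changes; only the information packed into each chunk does, which is exactly the point of the track-fan case.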
Chunking provides one complication in our measurement of working memory's capacity. Another- and deeper-complication grows out of the very nature of working memory. Early theorizing about working memory, as we said, was guided by the modal model, and this model implies that working memory is something like a box in which information is stored or a location in which information can be displayed. The traditional digit-span test fits well with this idea. If working memory is like a box, then it's sensible to ask how much "space" there is in the box: How many slots, or spaces, are there in it? This is precisely what the digit span measures, on the idea that each digit (or each chunk) is placed in its own slot.
We've suggested, though, that the modern conception of working memory is more dynamic-so that working memory is best thought of as a status (something like "currently activated") rather than a place. (See, e.g., Christophel, Klink, Spitzer, Roelfsema, & Haynes, 2017; also Figure 6.6.) On this basis, perhaps we need to rethink how we measure this memory's capacity-seeking a measure that reflects working memory's active operation.
Modern researchers therefore measure this memory's capacity in terms of operation span, a measure of working memory when it is "working." There are several ways to measure operation span, with the types differing in what "operation" they use (e.g., Bleckley, Foster, & Engle, 2015; Chow & Conway, 2015). One type is reading span. To measure this span, a research participant might be asked to read aloud a series of sentences, like these:
Due to his gross inadequacies, his position as director was terminated abruptly. It is possible, of course, that life did not arise on Earth at all. Immediately after reading the sentences, the participant is asked to recall each sentence's final word-in this case, "abruptly" and "all." If she can do this with these two sentences, she's asked to do the same task with a group of three sentences, and then with four, and so on, until the limit on her performance is located. This limit defines the person's working-memory capacity, or WMC. (However, there are other ways to measure operation span-see Figure 6.7.)
Let's think about what this task involves: storing materials (the ending words) for later use in the recall test, while simultaneously working with other materials (the full sentences). This juggling of processes, as the participant moves from one part of the task to the next, is exactly what working memory must do in day-to-day life. Therefore, performance in this test is likely to reflect the efficiency with which working memory will operate in more natural settings.
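The reading-span procedure can be sketched in code the same way as the digit span: grow the group of sentences until recall of the final words first fails. Everything here is a stand-in assumption for the demo (the generated sentences, the participant's fixed four-word capacity, and the starting group size of two):

```python
def final_words(sentences):
    """Extract each sentence's final word (the items to be stored)."""
    return [s.rstrip(".").split()[-1] for s in sentences]

def measure_reading_span(recall, sentence_pool):
    """Enlarge the sentence group until the participant first fails."""
    size = 2
    while size <= len(sentence_pool):
        group = sentence_pool[:size]
        if recall(group) != final_words(group):   # first failure marks the limit
            return size - 1
        size += 1
    return size - 1

# A stand-in participant who can juggle at most 4 final words while reading.
def simulated_recall(sentences, capacity=4):
    words = final_words(sentences)
    return words if len(words) <= capacity else []

pool = [f"Sentence number {i} ends with word{i}." for i in range(1, 8)]
print(measure_reading_span(simulated_recall, pool))  # → 4 for this stand-in
```

The key design point, matching the text, is that the measure forces storage (the final words) and processing (reading the sentences) to happen at the same time, which is what makes it a measure of working memory "working."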
Is operation span a valid measure-that is, does it measure what it's supposed to? Our hypothesis is that someone with a higher operation span has a larger working memory. If this is right, then someone with a higher span should have an advantage in tasks that make heavy use of this memory. Which tasks are these? They're tasks that require you to keep multiple ideas active at the same time, so that you can coordinate and integrate various bits of information. So here's our prediction: People with a larger span (i.e., a greater WMC) should do better in tasks that require the coordination of different pieces of information. Consistent with this claim, people with a greater WMC do have an advantage in many settings-in tests of reasoning, assessments of reading comprehension, standardized academic tests (including the verbal SAT), tasks that require multitasking, and more. (See, e.g., Ackerman, Beier, & Boyle, 2002; Butler, Arrington, & Weywadt, 2011; Daneman & Hannon, 2001; Engle & Kane, 2004; Gathercole & Pickering, 2000; Gray, Chabris, & Braver, 2003; Redick et al., 2016; Salthouse & Pink, 2008. For some complications, see Chow & Conway, 2015; Harrison, Shipstead, & Engle, 2015; Kanerva & Kalakoski, 2016; Mella, Fagot, Lecert, & de Ribaupierre, 2015.)
These results convey several messages. First, the correlations between WMC and performance provide indications about when it's helpful to have a larger working memory, which in turn helps us understand when and how working memory is used. Second, the link between WMC and measures of intellectual performance provides an intriguing hint about what we're measuring with tests (like the SAT) that seek to measure "intelligence." We'll return to this issue in Chapter 13 when we discuss the nature of intelligence. Third, it's important that the various correlations are observed with the more active measure of working memory (operation span) but not with the more traditional (and more static) span measure. This point confirms the advantage of the more dynamic measures and strengthens the idea that we're now thinking about working memory in the right way: not as a passive storage box, but instead as a highly active information processor. The Rehearsal Loop Working memory's active nature is also evident in another way: in the actual structure of this memory. The key here is that working memory is not a single entity but is instead a system built of several components (Baddeley, 1986, 1992, 2012; Baddeley & Hitch, 1974; also see Logie & Cowan, 2015). At the center of the working-memory system is a set of processes we discussed in Chapter 5: the executive control processes that govern the selection and sequence of thoughts. In discussions of working memory, these processes have been playfully called the "central executive," as if there were a tiny agent embedded in your mind, running your mental operations. Of course, there is no agent, and the central executive is just a name we give to the set of mechanisms that do run the show.
The central executive is needed for the "work" in working memory; if you have to plan a response or make a decision, these steps require the executive. But in many settings, you need less than this from working memory. Specifically, there are settings in which you need to keep ideas in mind, not because you're analyzing them right now but because you're likely to need them soon. In this case you don't need the executive. Instead, you can rely on the executive's "helpers," leaving the executive free to work on more difficult matters. Let's focus on one of working memory's most important helpers, the articulatory rehearsal loop. To see how the loop functions, try reading the next few sentences while holding on to these numbers: "1, 4, 6, 3." Got them? Now read on. You're probably repeating the numbers over and over to yourself, rehearsing them with your inner voice. But this takes very little effort, so you can continue reading while doing this rehearsal. Nonetheless, the moment you need to recall the numbers (what were they?), they're available to you.
In this setting, the four numbers were maintained by working memory's rehearsal loop, and with the numbers thus out of the way, the central executive could focus on the processes needed for reading. That is the advantage of this system: With mere storage handled by the helpers, the executive is available for other, more demanding tasks.
To describe this sequence of events, researchers would say that you used subvocalization-silent speech-to launch the rehearsal loop. This production by the "inner voice" created a representation of the target numbers in the phonological buffer, a passive storage system used for holding a representation (essentially an "internal echo") of recently heard or self-produced sounds. In other words, you created an auditory image in the "inner ear." This image started to fade away after a second or two, but you then subvocalized the numbers once again to create a new image, sustaining the material in this buffer. (For a glimpse of the biological basis for the "inner voice" and "inner ear," see Figure 6.8.)
Many lines of evidence confirm this proposal. For example, when people are storing information in working memory, they often make "sound-alike" errors: Having heard "F," they'll report back "S." When trying to remember the name "Tina," they'll slip and recall "Deena." The problem isn't that people mis-hear the inputs at the start; similar sound-alike confusions emerge if the inputs are presented visually. So, having seen "F," people are likely to report back "S"; they aren't likely in this situation to report back the similar-looking "E."
What produces this pattern? The cause lies in the fact that for this task people are relying on the rehearsal loop, which involves a mechanism (the "inner ear") that stores the memory items as (internal representations of) sounds. It's no surprise, therefore, that errors, when they occur, are shaped by this mode of storage.
As a test of this claim, we can ask people to take the span test while simultaneously saying "Tah-Tah-Tah" over and over, out loud. This concurrent articulation task obviously requires the mechanisms for speech production. Therefore, those mechanisms are not available for other use, including subvocalization. (If you're directing your lips and tongue to produce the "Tah-Tah-Tah" sequence, you can't at the same time direct them to produce the sequence needed for the subvocalized materials.) How does this constraint matter? First, note that our original span test measured the combined capacities of the central executive and the loop. That is, when people take a standard span test (as opposed to the more modern measure of operation span), they store some of the to-be-remembered items in the loop and other items via the central executive. (This is a poor use of the executive, underutilizing its talents, but that's okay here because the standard span task doesn't require anything beyond mere storage.)
With concurrent articulation, though, the loop isn't available for use, so we're now measuring the capacity of working memory without the rehearsal loop. We should predict, therefore, that concurrent articulation, even though it's extremely easy, should cut memory span drastically. This prediction turns out to be correct. Span is ordinarily about seven items; with concurrent articulation, it drops by roughly a third-to four or five items (Chincotta & Underwood, 1997; see Figure 6.9).
Second, with visually presented items, concurrent articulation should eliminate the sound-alike errors. Repeatedly saying "Tah-Tah-Tah" blocks use of the articulatory loop, and it's in this loop, we've proposed, that the sound-alike errors arise. This prediction, too, is correct: With concurrent articulation and visual presentation of the items, sound-alike errors are largely eliminated.
The Working-Memory System
As we have mentioned, your working memory contains the thoughts and ideas you're working on right now, and often this means you're trying to keep multiple ideas in working memory all at the same time. That can cause difficulties, because working memory has only a small capacity. That's why working memory's helpers are so important: They substantially increase working memory's capacity. Against this backdrop, it's not surprising that the working-memory system relies on other helpers in addition to the rehearsal loop. For example, the system also relies on the visuospatial buffer, used for storing visual materials such as mental images, in much the same way that the rehearsal loop stores speech-based materials. (We'll have more to say about mental images in Chapter 11.) Baddeley (the researcher who launched the idea of a working-memory system) has also proposed another component of the system: the episodic buffer. This component is proposed as a mechanism that helps the executive organize information into a chronological sequence-so that, for example, you can keep track of a story you've just heard or a film clip you've just seen (e.g., Baddeley, 2000, 2012; Baddeley & Wilson, 2002; Baddeley, Eysenck, & Anderson, 2009). The role of this component is evident in patients with profound amnesia who seem unable to put new information into long-term storage, but who still can recall the flow of narrative in a story they just heard. This short-term recall, it seems, relies on the episodic buffer-an aspect of working memory that's unaffected by the amnesia. In addition, other helpers can be documented in some groups of people. Consider people who have been deaf since birth and communicate via sign language. We wouldn't expect these individuals to rely on an "inner voice" and an "inner ear"-and they don't.
Instead, people who have been deaf since birth seem to rely on a different helper for working memory: They use an "inner hand" (and covert sign language) rather than an "inner voice" (and covert speech). As a result, their memory is disrupted if they're asked to wiggle their fingers during a memory task (similar to a hearing person saying "Tah-Tah-Tah"), and they also tend to make "same hand-shape" errors in working memory (similar to the sound-alike errors made by the hearing population).

The Central Executive
What can we say about the main player within the working-memory system-the central executive? In our discussion of attention (in Chapter 5), we argued that executive control processes are needed to govern the sequence of thoughts and actions; these processes enable you to set goals, make plans for reaching those goals, and select the steps needed for implementing those plans. Executive control also helps whenever you want to rise above habit or routine, in order to "tune" your words or deeds to the current circumstances.
For purposes of the current chapter, though, let's emphasize that the same processes control the selection of ideas that are active at any moment in time. And, of course, these active ideas (again, by definition) constitute the contents of working memory. It's inevitable, then, that we would link executive control with this type of memory. With all these points in view, we're ready to move on. We've now updated the modal model (Figure 6.1) in important ways, and in particular we've abandoned the notion of a relatively passive short-term memory serving largely as a storage container. We've shifted to a dynamic conception of working memory, with the proposal that this term is merely the name for an organized set of activities-especially the complex activities of the central executive together with its various helpers.
But let's also emphasize that in this modern conception, just as in the modal model, working memory is quite fragile. Each shift in attention brings new information into working memory, and the newly arriving material displaces earlier items. Storage in this memory, therefore, is temporary. Obviously, then, we also need some sort of enduring memory storage, so that we can remember things that happened an hour, or a day, or even years ago. Let's turn, therefore, to the functioning of long-term memory.

Demonstration 6.2: Chunking
The text mentions the benefits of chunking, and these benefits are easy to demonstrate. First, let's measure your memory span in the normal way: Cover the list of letters below with your hand or a piece of paper. Now, slide your hand or paper down, to reveal the first row of letters. Read the row silently, pausing briefly after you read each letter. Then, close your eyes, and repeat the row aloud. Open your eyes. Did you get it right? If so, do the same with the next row, and keep going until you hit a row that's too long-that is, a row for which you make errors. Count the items in that row. This count is your letter span.
Now, we'll do the exercise again, but this time, with rows containing letter pairs, not letters. Using the same procedure, at what row do you start to make errors?
EL ZA IN
ET LO JA RE
CA OM DO IG FU
AT YE OR CA VI TA
EB ET PI NU ES RA SU
RI NA FO ET HI ER WU AG
UR KA TE PO AG UF WO SA KI
SO HU JA IT WO FU CE YO FI UT
It's likely that your span measured with single letters was 6 or 7, or perhaps 8. It's likely that your span measured with letter pairs was a tiny bit smaller, perhaps 5 or 6 pairs-but that means you're now remembering 10 or 12 letters. If we focus on the letter count, therefore, your memory span seems to have increased from the first test to the second. But that's the wrong way to think about this. Instead, your memory span is constant (or close to it). What's changing is how you use that span-that is, how many letters you cram into each "chunk."
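The arithmetic behind this point can be sketched in code. This is a minimal illustration; the span values are just the rough figures mentioned above, not precise measurements.

```python
# Sketch of the chunking arithmetic: memory span stays roughly constant in
# CHUNKS, so packing more letters into each chunk raises the number of
# LETTERS recalled, even though capacity itself hasn't changed.

def letters_recalled(span_in_chunks, letters_per_chunk):
    """Total letters held when each of `span_in_chunks` chunks
    contains `letters_per_chunk` letters."""
    return span_in_chunks * letters_per_chunk

print(letters_recalled(7, 1))  # single letters: ~7 letters recalled
print(letters_recalled(6, 2))  # letter pairs: ~12 letters recalled

# With meaningful phrases, a chunk can hold many letters. The sentence used
# in the next step of this demonstration packs 51 letters into ~6 chunks:
sentence = "The tyrant passed strict laws limiting the citizens' freedom"
print(sum(ch.isalpha() for ch in sentence))  # 51
```

Six chunks at an average of eight or nine letters per chunk yields the 51-letter sentence, with no change in the underlying span.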
Now, one more step: Read the next sentence to yourself, then close your eyes, and try repeating the sentence back.
The tyrant passed strict laws limiting the citizens' freedom. Could you do this? Were you able to repeat the sentence? If so, notice that your memory now seems able to hold 51 letters. Again, if we focus on letter count, your memory span is growing at an astonishing speed! But, instead, let's count chunks. The phrase "The tyrant" is probably just one chunk, likewise "strict laws" and "the citizens." Therefore, this sentence really contains just six chunks-and so is easily within your memory span!

Demonstration 6.3: The Articulatory Rehearsal Loop
Chapter 6 introduces the notion of the articulatory rehearsal loop, one of the key "helpers" within the working-memory system. As the chapter describes, many lines of evidence document the existence of this loop, but one type of evidence is especially easy to demonstrate. The demonstration is mentioned in the chapter, but here is a more elaborate version.
Read these numbers and think about them for a moment, so that you'll be able to recall them in a few seconds: 8257. Now, while you're holding on to these numbers, read the following paragraph:
You should, right now, be rehearsing those numbers while you are reading this paragraph, so that you'll be able to recall them when you're done with the paragraph. You are probably storing the numbers in your articulatory rehearsal loop, saying the numbers over and over to yourself. Using the loop in this way requires little effort or attention, leaving the central executive free to work on the concurrent task of reading these sentences-identifying the words, assembling them into phrases, and figuring out what the phrases mean. As a result, with the loop holding the numbers and the executive doing the reading, there is no conflict and no problem. Therefore, this combination is relatively easy.
Now, what were those numbers? Most people can recall them with no problem, for the reasons just described. They read-and understood-the passage, and holding on to the numbers caused no difficulty at all. Did you understand the passage? Can you summarize it, briefly, in your own words?
Next, try a variation: Again, you will place four numbers in memory, but then you'll immediately start saying "Tah-Tah-Tah" over and over out loud, while reading a passage. Ready? The numbers are: 3 8 1 4. Start saying "Tah-Tah-Tah" and read on.
Again, you should be rehearsing the numbers as you read, and also repeating "Tah-Tah-Tah" over and over out loud. The repetitions of "Tah-Tah-Tah" demand little thought, but they do require the neural circuits and the muscles that are needed for speaking, and with these resources tied up in this fashion, they're not available for use in the rehearsal loop. As a result, you don't have the option of storing the four numbers in the loop. That means you need to find some other means of remembering the numbers, and that's likely to involve the central executive. As a result, the executive needs to do two things at once-hold on to the numbers, and read the passage.
Now, what were those numbers? Many people in this situation find they've forgotten the numbers. Others can recall the numbers but find this version of the task (in which the executive couldn't rely on the rehearsal loop) much harder, and they may report that they actually found themselves skimming the passage, not reading it. Again, can you summarize the paragraph you just read? Glance back over the paragraph to see if your summary is complete: Did you miss something? You may have, because many people report that in this situation their attention hops back and forth, so that they read a little, think about the numbers, read some more, think about the numbers again, and so on-an experience they didn't have without the "Tah-Tah-Tah."
Finally, we need one more condition: Did the "Tah-Tah-Tah" disrupt your performance because (as proposed) it occupied your rehearsal loop? Or was this task simply a distraction, disrupting your performance because saying "Tah-Tah-Tah" over and over was irritating, or perhaps embarrassing? To find out, let's try one more task: Close your fist, but leave your thumb sticking out, and position your hand so that you're making the conventional "thumbs down" signal. With your hand in this shape, tap your thumb, over and over, on the top of your head. Keep doing this while reading the following passage. Once again, though, hold these numbers in your memory as you read: 7 2 4 5.
In this condition, you're again producing a rhythmic activity as you read, although it's tapping rather than repeating a syllable. If the problem in the previous condition was distraction, you should be distracted in the same way here. And you probably look ridiculous tapping your head in this fashion. If the problem in the previous condition was embarrassment, you should again be embarrassed here. On either of these grounds, this condition should be just as hard as the previous one. But if the problem in the previous condition depended instead on the repeated syllables blocking you from using your articulatory loop, that won't be a problem here, and this condition should be easier than the previous one.
What were the numbers? This condition probably was easy-allowing us to reject the idea that the problem lies in distraction or embarrassment. Instead, use of the articulatory loop really is the key!
Demonstration adapted from Baddeley, A. (1986). Working memory. Oxford, England: Clarendon Press.

Demonstration 6.4: Sound-Based Coding
The chapter mentions that people often make sound-based errors when holding information in working memory. This is because working-memory storage relies in part on an auditory buffer-the so-called inner ear. The inner ear, in turn, relies on mechanisms ordinarily used for hearing, mechanisms that are involved when you're listening to actual, out-in-the-world sounds. The use of these mechanisms essentially guarantees that things that sound alike in actual hearing will also sound alike in the inner ear-and this produces the confusions that we see in our data, with people remembering that they saw an "F" (and thus the sound "eff"), for example, when they really saw an "S" ("ess").
This proposal about the inner ear also has other implications, and we can use those implications as further tests of the proposal. For example, if sound-alike items are confusable with each other in memory, then these items may actually be harder to remember, compared to items that don't sound alike. Is this the case? Read the list of letters below out loud, and then cover the list with your hand. Think about the list for 15 seconds or so, and then write it down. How many did you get right?
Here's the list of letters:
E C V T G D B
Now, do the same with this list-read it aloud quickly, and then cover it. Think about it for 15 seconds, and then write it down.
F R J A L O Q
Again, how many did you get right?
It's possible that this demonstration won't work for you-because it's possible you'll recall both lists perfectly! But if you were equally accurate with the two lists, did you have to work harder for one list than for the other? And if you made errors in your recall, which list produced more errors?
Most people find the first (sound-alike) list more difficult and are more likely to make errors with that list than with the second one. This is just what we'd expect if working memory relies on some sort of sound-based code.

Entering Long-Term Storage: The Need for Engagement
We've already seen an important clue regarding how information gets established in long-term storage: In discussing the primacy effect, we suggested that the more an item is rehearsed, the more likely you are to remember that item later. To pursue this point, though, we need to ask what exactly rehearsal is and how it might work to promote memory.

Two Types of Rehearsal
The term "rehearsal" doesn't mean much beyond "thinking about." In other words, when a research participant rehearses an item on a memory list, she's simply thinking about that item-perhaps once, perhaps over and over; perhaps mechanically, or perhaps with close attention to what the item means. Therefore, there's considerable variety within the activities that count as rehearsal, and psychologists find it useful to sort this variety into two broad types.
As one option, people can engage in maintenance rehearsal, in which they simply focus on the to-be-remembered items themselves, with little thought about what the items mean or how they relate to one another. This is a rote, mechanical process, recycling items in working memory by repeating them over and over. In contrast, relational, or elaborative, rehearsal involves thinking about what the to-be-remembered items mean and how they're related to one another and to other things you already know.
Relational rehearsal is vastly superior to maintenance rehearsal for establishing information in long-term memory. In fact, in many settings maintenance rehearsal provides no benefit at all. As an informal demonstration of this point, consider the following experience (although, for a formal demonstration of this point, see Craik & Watkins, 1973). You're watching your favorite reality show on TV. The announcer says, "To vote for Contestant #4, text 4 to 21523 from your mobile phone!" You reach into your pocket for your phone but realize you left it in the other room. So you recite the number to yourself while scurrying for your phone, but then, just before you dial, you see that you've got a text message. You pause, read the message, and then you're ready to dial, but... you don't have a clue what the number was. What went wrong? You certainly heard the number, and you rehearsed it a couple of times while moving to grab your phone. But despite these rehearsals, the brief interruption from reading the text message seems to have erased the number from your memory. However, this isn't ultra-rapid forgetting. Instead, you never established the number in memory in the first place, because in this setting you relied only on maintenance rehearsal. That kept the number in your thoughts while you were moving across the room, but it did nothing to establish the number in long-term storage. And when you try to dial the number after reading the text message, it's long-term storage that you need.
The idea, then, is that if you think about something only in a mindless and mechanical way, the item won't be established in your long-term memory. Similarly, long-lasting memories aren't created simply by repeated exposures to the items to be remembered. If you encounter an item over and over but, at each encounter, barely think about it (or think about it only in a mechanical way), then this, too, won't produce a long-term memory. As a demonstration, consider the ordinary penny. Adults in the United States have probably seen pennies tens of thousands of times. Adults in other countries have seen their own coins just as often. If sheer exposure is what counts for memory, people should remember perfectly what these coins look like.
But, of course, most people have little reason to pay attention to the penny. Pennies are a different color from the other coins, so they can be identified at a glance without further scrutiny. And, if it's scrutiny that matters for memory-or, more broadly, if we remember what we pay attention to and think about-then memory for the coin should be quite poor. The evidence on this point is clear: People's memory for the penny is remarkably bad. For example, most people know that Lincoln's head is on the "heads" side, but which way is he facing? Is it his right cheek that's visible or his left? What other markings are on the coin? Most people do very badly with these questions; their answers to the "Which way is he facing?" question are close to random (Nickerson & Adams, 1979). And performance is similar for people in other countries remembering their own coins. (Also see Bekerian & Baddeley, 1980; Rinck, 1999, for a much more consequential example.)
As a related example, consider the logo that identifies Apple products-the iPhone, the iPad, or one of the Apple computers. Odds are good that you've seen this logo hundreds and perhaps thousands of times, but you've probably had no reason to pay attention to its appearance. The prediction, then, is that your memory for the logo will be quite poor-and this prediction is correct. In one study, only 1 of 85 participants was able to draw the logo correctly-with the bite on the proper side, the stem tilted the right way, and the dimple properly placed in the logo's bottom (Blake, Nazarian, & Castel, 2015; see Figure 6.10). And-surprisingly-people who use an Apple computer (and therefore see the logo every time they turn on the machine) perform at a level not much better than people who use a PC.
The Need for Active Encoding
Apparently, it takes some work to get information into long-term memory. Merely having an item in front of your eyes isn't enough-even if the item is there over and over and over. Likewise, repeatedly thinking about an item doesn't, by itself, establish a memory. That's evident in the fact that maintenance rehearsal seems ineffective at promoting memory.
Further support for these claims comes from studies of brain activity during learning. In several procedures, researchers have used fMRI recording to keep track of the moment-by-moment brain activity in participants who were studying a list of words (Brewer, Zhao, Desmond, Glover, & Gabrieli, 1998; Wagner, Koutstaal, & Schacter, 1999; Wagner et al., 1998; also see Levy, Kuhl, & Wagner, 2010). Later, the participants were able to remember some of the words they had learned but not others, which allowed the investigators to return to their initial recordings and compare brain activity during the learning process for words that were later remembered and words that were later forgotten. Figure 6.11 shows the results, with a clear difference during the initial encoding between these two types of words. Greater levels of brain activity (especially in the hippocampus and regions of the prefrontal cortex) were reliably associated with greater probabilities of retention later on.
These fMRI results are telling us, once again, that learning is not a passive process. Instead, activity is needed to lodge information into long-term memory, and, apparently, higher levels of this activity lead to better memory. But this raises some new questions: What is this activity? What does it accomplish? And if-as it seems-maintenance rehearsal is a poor way to memorize, what type of rehearsal is more effective?

Incidental Learning, Intentional Learning, and Depth of Processing
Consider a student taking a course in college. The student knows that her memory for the course materials will be tested later (e.g., in the final exam). And presumably she'll take various steps to help herself remember: She may read through her notes again and again; she may discuss the material with friends; she may try outlining the material. Will these various techniques work-so that she'll have a complete and accurate memory when the exam takes place? And notice that the student is taking these steps in the context of wanting to memorize; she wants to do well on the exam! How does this motivation influence performance? In other words, how does the intention to memorize influence how or how well material is learned? In an early experiment, participants in one condition heard a list of 24 words; their task was to remember as many as they could. This is intentional learning-learning that is deliberate, with an expectation that memory will be tested later. Other groups of participants heard the same 24 words but had no idea that their memories would be tested. This allows us to examine the impact of incidental learning-that is, learning in the absence of any intention to learn. One of the incidental-learning groups was asked simply, for each word, whether the word contained the letter e. A different incidental-learning group was asked to look at each word and to report how many letters it contained. Another group was asked to consider each word and to rate how pleasant it seemed.
Later, all the participants were tested-and asked to recall as many of the words as they could. (The test was as expected for the intentional-learning group, but it was a surprise for the other groups.) The results are shown in Figure 6.12A (Hyde & Jenkins, 1969). Performance was relatively poor for the "Find the e" and "Count the letters" groups but appreciably better for the "How pleasant?" group. What's striking, though, is that the "How pleasant?" group, with no intention to memorize, performed just as well as the intentional-learning ("Learn these!") group. The suggestion, then, is that the intention to learn doesn't add very much; memory can be just as good without this intention, provided that you approach the materials in the right way.
This broad pattern has been reproduced in many other experiments (to name just a few: Bobrow & Bower, 1969; Craik & Lockhart, 1972; Hyde & Jenkins, 1973; Jacoby, 1978; Lockhart, Craik, & Jacoby, 1976; Parkin, 1984; Slamecka & Graf, 1978). As one example, consider a study by Craik and Tulving (1975). Their participants were led to do incidental learning (i.e., they didn't know their memories would be tested). For some of the words shown, the participants did shallow processing-that is, they engaged the material in a superficial way. Specifically, they had to say whether the word was printed in CAPITAL letters or not. (Other examples of shallow processing would be decisions about whether the words are printed in red or in green, high or low on the screen, etc.) For other words, the participants had to do a moderate level of processing: They had to judge whether each word shown rhymed with a particular cue word. Finally, for other words, participants had to do deep processing. This is processing that requires some thought about what the words mean; specifically, Craik and Tulving asked whether each word shown would fit into a particular sentence.
The results are shown in Figure 6.12B. Plainly, there is a huge effect of level of processing, with deeper processing (i.e., more attention to meaning) leading to better memory. In addition, Craik and Tulving (and many other researchers) have confirmed the Hyde and Jenkins finding that the intention to learn adds little. That is, memory performance is roughly the same in conditions in which participants do shallow processing with an intention to memorize, and in conditions in which they do shallow processing without this intention. Likewise, the outcome is the same whether people do deep processing with the intention to memorize or without. In study after study, what matters is how people approach the material they're seeing or hearing. It's that approach-that manner of engagement-that determines whether memory will be excellent or poor later on. The intention to learn seems, by itself, not to matter.

Demonstration 6.5: Remembering Things You Hadn't Noticed
As you move around in the world, you pass by many of the same sights-the same buildings, the same furniture-over and over. But usually you have no reason to take note of these things, and so, despite the repeated exposures, you may have little or no memory for these often-passed places or objects. The chapter mentions a couple of examples, but let's put this idea to the test.
To promote public safety, in case of a fire it will be immensely valuable if you can quickly grab a fire extinguisher and put the fire out before it has a chance to spread. With this idea in mind, many public buildings have fire extinguishers positioned throughout the building so that this safety equipment will be near at hand at the moment of need. Take a minute. Can you list five or six places where you pass a fire extinguisher each day? Then, over the next few days, do your best to be alert to where the fire extinguishers really are. How many didn't make it onto your list, probably because you never noticed them in the first place?
In the same fashion, automated external defibrillators (AED devices) are positioned throughout public buildings so that, in case of need, you'll be able to grab one quickly and use it to save someone who has suffered a cardiac arrest. Here, timing is crucial, and it's important that you find and use the defibrillator quickly. Do you know where the defibrillators are in the buildings you often spend time in? Again, take a minute. Can you list the locations? Each one is clearly marked with an AED sign (with a picture of a lightning bolt on a heart).
Once more, try over the next days to notice where the AED signs and devices are. How many of them didn't you remember because you had no reason to notice them?
One more test case: Most readers of this book are college or university students; most are taking several courses, and most of these courses have at least one textbook. Take a moment and describe what's on the cover of each of your textbooks. Is there artwork of any sort? If so, is it an abstract design or a representation of some sort? If it's a representation, is it a photograph or some other form of visual art? What text is shown on the book's cover? The odds are good that you'll do rather poorly on at least one of these exercises-remembering the fire extinguisher locations, or the defibrillators, or the book covers. But don't despair. This isn't an indication that you have a terrible memory. It's simply a confirmation that you, like most people, don't remember things that you had no reason to notice. However, bear in mind that you want to know where the fire extinguishers and defibrillators are; someone's life could depend on you having these memories! So take a moment to notice these objects as you move around!
For more on this, see Hargis, M. B., McGillivray, S., & Castel, A. D. (2017). Memory for textbook covers: When and why we remember a book by its cover. Applied Cognitive Psychology, 32, 39-46.

e-Demonstration 6.6: The Effects of Unattended Exposure

How does information get entered into long-term storage? One idea is that mere exposure is enough-so that if an object is in front of your eyes over and over and over, you'll learn exactly what the object looks like. However, this claim is false. Memories are created through a process of active engagement with materials; mere exposure is insufficient.
This point is easy to demonstrate, but for the demonstration you'll need to ask one or two friends a few questions. (You can't just test yourself, because the textbook chapter gives away the answer, and so your memory is already altered by reading the chapter.)
Approach a friend who hasn't, as far as you know, read the Cognition text, and ask your friend these questions:
1. Which president is shown on the Lincoln penny? (It will be troubling if your friend gets this wrong!)
2. What portion of the president's anatomy is shown on the "heads" side of the coin? (It will be even more troubling if your friend gets this one wrong.)
3. Is the head facing forward, or is it visible only in profile? (This question, too, will likely be very easy.)
4. If the head is in profile, is it facing to the viewer's right, so that you can see the right ear and right cheek, or is it facing to the viewer's left, so that you can see the left ear and left cheek?
(For Canadian or British readers, you can ask the same questions about your nation's penny-assuming that pennies are still around when you run the test. Of course, your penny shows, on the "heads" side, the profile of the monarch who was reigning when the penny was issued. But the memory questions-and the likely outcome-are otherwise the same.)
Odds are good that half of the people you ask will say "facing right" and half will say "facing left." In other words, people trying to remember this fact about the penny are no more accurate than they would be if they answered at random.
Now, a few more questions. Is your friend wearing a watch? If so, reach out and put your hand on his or her wrist, so that you hide the watch from view. Now ask your friend:
5. Does your watch have all the numbers, from 1 through 12, on it? Or is it missing some of the numbers? If so, which numbers does it have?
6. What style of numbers is used? Ordinary numerals or Roman numerals?
7. What style of print is used for the numbers? An italic? A "normal" vertical font? A font that's elaborate in some way, or one that's relatively plain?

How accurate are your friends? Chances are excellent that many of your friends will answer these questions incorrectly-even though they've probably looked at their watches over and over and over during the years in which they've owned the watch.
By the way, this demonstration may yield different results if you test women than if you test men. Women are often encouraged to think of wristwatches as jewelry and so are more likely to think about what the watch looks like. Men are encouraged to take a more pragmatic view of their watches -and so they often regard them as timepieces, not jewelry. As a result, women are more likely than men to notice, and think about, the watch's appearance, and thus to remember exactly what their watches look like!
Even with this last complication, the explanation for all of these findings is straightforward. Information is not recorded in your memory simply because the information has been in front of your eyes at some point. Instead, information is recorded into memory only if you pay attention to that information and think about it in some way. People have seen pennies thousands of times, but they've not had any reason to think about Lincoln's position. Likewise, they've looked at their watches many, many times but probably haven't had a reason to think about the details of the numbers. As a result, and despite an enormous number of "learning opportunities," these unattended details are simply not recorded into memory. Finally, one last point. You own many books, and those books are often in your view-for example, when they're sitting on your desk. But you usually have no reason to pay attention to a book's cover, and so, by the logic we're considering here, you likely will have little (or no) memory for the cover. This point was confirmed in a 2017 study: Students in a college course were asked what image was shown on the cover of the course's textbook. Students were given four options for what the image might be, and so, if they were guessing randomly, they would have been right 25% of the time. In fact, performance was better than this-but not by much. Only 39% of the students chose the right answer. (Said differently, almost two-thirds of the students chose the wrong answer!) And now one ironic point: The textbook used as the "test stimulus" in this study was the 7th edition of the textbook you're using-that is, Reisberg's Cognition text. This leads to an obvious question: Do you remember what the cover of the book looks like?
Demonstration adapted from Nickerson, R., & Adams, M. (1979). Long-term memory for a common object. Cognitive Psychology, 11, 287-307. Also see Hargis, M. B., McGillivray, S., & Castel, A. D. (2017). Memory for textbook covers: When and why we remember a book by its cover. Applied Cognitive Psychology, 32, 39-46.

e-Demonstration 6.7: Depth of Processing
Many experiments show that deep processing-paying attention to an input's meaning, or its implications-helps memory. In contrast, materials that receive only shallow processing tend not to be well remembered. This contrast is reliable and powerful, and also easily demonstrated.
The following is a list of questions accompanied by single words. Some of the questions concern categories. For example, the question might ask: "Is a type of vehicle? Truck." For this question, the answer would be yes.
Some of the questions involve rhyme. For example, the question might ask: "Rhymes with chair? Horse." Here, the answer is no.
Still other questions concern spelling patterns-in particular, the number of vowels in the word. For example, if asked "Has three vowels? Chair," the answer would again be no.
Go through the list of questions at a comfortable speed, and say "yes" or "no" aloud in response to each question.
Rhymes with angle? Speech
Rhymes with coffee? Chapel
Is a type of silverware? Brush
Has one vowel? Sonnet
Has two vowels? Cheek
Rhymes with rich? Witch
Is a thing found in a garden? Flame
Is a type of insect? Roach
Has two vowels? Flour
Has one vowel? Twig
Is a rigid object? Honey
Rhymes with bin? Grin
Rhymes with elder? Knife
Rhymes with fill? Drill
Has three vowels? Sheep
Is a human sound? Moan
Rhymes with merit? Copper
Has two vowels? Claw
Rhymes with shove? Glove
Is a type of entertainer? Singer
Is a boundary dispute? Monk
Rhymes with candy? Bear
Rhymes with star? Jar
Has four vowels? Cherry
Has two vowels? Cart
Is a type of plant? Tree
Is a container for liquid? Clove
Rhymes with pearl? Earl
Is something sold on street corners? Robber
Has two vowels? Pool
Is a part of a ship? Mast
Is a part of an airplane? Week
Has four vowels? Fiddle
Has one vowel? Pail
This list contained 12 of each type of question-12 rhyme questions, 12 spelling questions, and 12 questions concerned with meaning. Was this task easy or hard? Most people have no trouble at all with this task, and they give correct answers to every one of the questions.
Each of these questions had a word provided with it, and you needed that word to answer the question. How many of these "answer words" do you remember? Without looking back at the list, write down as many of the answer words as you can remember on a piece of paper.
Now, go back and check your answers. First, put a checkmark alongside the word if it did in fact occur in the earlier list-so that your recall is correct.
Second, for each of the words you remembered, do the following:
Put an S next to the word you recalled if that word appeared in a spelling question (i.e., asking about the number of vowels).
Put an R next to the word you recalled if that word appeared in one of the rhyme questions.
Put an M next to the word you recalled if that word appeared in one of the questions concerned with meaning. How many S words did you recall? How many R words? How many M words?
It's close to certain that you remembered relatively few S words, more of the R words, and even more of the M words. In fact, you may have recalled most of the 12 M words. Is this the pattern of your recall? If so, then you just reproduced the standard levels-of-processing effect, with deeper processing (attention to meaning) reliably producing better recall, for the reasons described in the textbook chapter.
Demonstration adapted from Craik, F., & Tulving, E. (1975). Depth of processing and the retention of words in episodic memory. Journal of Experimental Psychology: General, 104, 269-294.

The Role of Meaning and Memory Connections
The message so far seems clear: If you want to remember the sentences you're reading in this text or the materials you're learning in the training sessions at your job, you should pay attention to what these materials mean. That is, you should try to do deep processing. And if you do deep processing, it won't matter whether you're trying hard to memorize the materials (intentional learning) or merely paying attention to the meaning because you find the material interesting, with no plan for memorizing (incidental learning).
But what lies behind these effects? Why does attention to meaning lead to good recall? Let's start with a broad proposal; we'll then fill in the evidence for this proposal.

Connections Promote Retrieval
Perhaps surprisingly, the benefits of deep processing may not lie in the learning process itself. Instead, deep processing may influence subsequent events. More precisely, attention to meaning may help you by facilitating retrieval of the memory later on. To understand this point, consider what happens whenever a library acquires a new book. On its way into the collection, the new book must be catalogued and shelved appropriately. These steps happen when the book arrives, but the cataloguing doesn't literally influence the arrival of the book into the building. The moment the book is delivered, it's physically in the library, catalogued or not, and the book doesn't become "more firmly" or "more strongly" in the library because of the cataloguing.
Even so, the cataloguing is crucial. If the book were merely tossed on a random shelf somewhere, with no entry in the catalogue, users might never be able to find it. Without a catalogue entry, users of the library might not even realize that the book was in the building. Notice, then, that cataloguing happens at the time of arrival, but the benefit of cataloguing isn't for the arrival itself. (If the librarians all went on strike, so that no books were being catalogued, books would continue to arrive, magazines would still be delivered, and so on. Again: The arrival doesn't depend on cataloguing.) Instead, the benefit of cataloguing is for events that happen after the book's arrival-cataloguing makes it possible (and maybe makes it easy) to find the book later on. The same is true for the vast library that is your memory (cf. Miller & Springer, 1973). The task of learning is not merely a matter of placing information into long-term storage. Learning also needs to establish some appropriate indexing; it must pave a path to the newly acquired information, so that this information can be retrieved at some future point. Thus, one of the main chores of memory acquisition is to lay the groundwork for memory retrieval.
But what is it that facilitates memory retrieval? There are, in fact, several ways to search through memory, but a great deal depends on memory connections. Connections allow one memory to trigger another, and then that memory to trigger another, so that you're "led," connection by connection, to the sought-after information. In some cases, the connections link one of the items you're trying to remember to some of the other items; if so, finding the first will lead you to the others. In other settings, the connections might link some aspect of the context-of-learning to the target information, so that when you think again about the context ("I recognize this room-this is where I was last week"), you'll be led to other ideas ("Oh, yeah, I read the funny story in this room"). In all cases, though, this triggering will happen only if the relevant connections are in place-and establishing those connections is a large part of what happens during learning.
This line of reasoning has many implications, and we can use those implications as a basis for testing whether this proposal is correct. But right at the start, it should be clear why, according to this account, deep processing (i.e., attention to meaning) promotes memory. The key is that attention to meaning involves thinking about relationships: "What words are related in meaning to the word I'm now considering? What words have contrasting meaning? What is the relationship between the start of this story and the way the story turned out?" Points like these are likely to be prominent when you're thinking about what some word (or sentence or event) means, and these points will help you to find (or, perhaps, to create) connections among your various ideas. It's these connections, we're proposing, that really matter for memory.

Elaborate Encoding Promotes Retrieval
Notice, though, that on this account, attention to meaning is not the only way to improve memory. Other strategies should also be helpful, provided that they help you to establish memory connections. As an example, consider another classic study by Craik and Tulving (1975). Participants were shown a word and then shown a sentence with one word left out. Their task was to decide whether the word fit into the sentence. For example, they might see the word "chicken" and then the sentence "She cooked _______." The appropriate response would be yes, because the word does fit in this sentence. After a series of these trials, there was a surprise memory test, with participants asked to remember all the words they had seen. But there was an additional element in this experiment. Some of the sentences shown to participants were simple, while others were more elaborate. For example, a more complex sentence might be: "The great bird swooped down and carried off the struggling ________." Sentences like this one produced a large memory benefit-words were much more likely to be remembered if they appeared with these rich, elaborate sentences than if they had appeared in the simpler sentences (see Figure 6.13).
Apparently, then, deep and elaborate processing leads to better recall than deep processing on its own. Why? The answer hinges on memory connections. Maybe the "great bird swooped" sentence calls to mind a barnyard scene with the hawk carrying away a chicken. Or maybe it calls to mind thoughts about predator-prey relationships. One way or another, the richness of the sentence offers the potential for many connections, as it calls other thoughts to mind, each of which can be connected to the target sentence. These connections, in turn, provide potential retrieval paths-paths that can, in effect, guide your thoughts toward the content to be remembered. All of this seems less likely for the simpler sentences, which will evoke fewer connections and so establish a narrower set of retrieval paths. Consequently, words associated with these sentences are less likely to be recalled later on.

Organizing and Memorizing

Sometimes, we've said, memory connections link the to-be-remembered material to other information already in memory. In other cases, the connections link one aspect of the to-be-remembered material to another aspect of the same material. Such connections ensure that if any part of the material is recalled, then all will be recalled.
In all settings, though, the connections are important, and that leads us to ask how people go about discovering (or creating) these connections. More than 70 years ago, a psychologist named George Katona argued that the key lies in organization (Katona, 1940). Katona's argument was that the processes of organization and memorization are inseparable: You memorize well when you discover the order within the material. Conversely, if you find (or impose) an organization on the material, you will easily remember it. These suggestions are fully compatible with the conception we're developing here, since what organization provides is memory connections.

Mnemonics

For thousands of years, people have longed for "better" memories and, guided by this desire, people in the ancient world devised various techniques to improve memory-techniques known as mnemonic strategies. In fact, many of the mnemonics still in use date back to ancient Greece. (It's therefore appropriate that these techniques are named in honor of Mnemosyne, the goddess of memory in Greek mythology.)
How do mnemonics work? In general, these strategies provide some way of organizing the to-be-remembered material. For example, one broad class of mnemonic, often used for memorizing sequences of words, links the first letters of the words into some meaningful structure. Thus, children rely on ROY G. BIV to memorize the sequence of colors in the rainbow (red, orange, yellow...), and they learn the lines in music's treble clef via "Every Good Boy Deserves Fudge" or "... Does Fine" (the lines indicate the musical notes E, G, B, D, and F). Biology students use a sentence like "King Philip Crossed the Ocean to Find Gold and Silver" (or: "... to Find Good Spaghetti") to memorize the sequence of taxonomic categories: kingdom, phylum, class, order, family, genus, and species.
Other mnemonics involve the use of mental imagery, relying on "mental pictures" to link the to-be-remembered items to one another. (We'll have much more to say about "mental pictures" in Chapter 11.) For example, imagine a student trying to memorize a list of word pairs. For the pair eagle-train, the student might imagine the eagle winging back to its nest with a locomotive in its beak. Classic research evidence indicates that images like this can be enormously helpful. It's important, though, that the images show the objects in some sort of relationship or interaction-again highlighting the role of organization. It doesn't help just to form a picture of an eagle and a train sitting side by side (Wollen, Weber, & Lowry, 1972; for another example of a mnemonic, see Figure 6.14).
A different type of mnemonic provides an external "skeleton" for the to-be-remembered materials, and mental imagery can be useful here, too. Imagine that you want to remember a list of largely unrelated items-perhaps the entries on your shopping list, or a list of questions you want to ask your adviser. For this purpose, you might rely on one of the so-called peg-word systems. These systems begin with a well-organized structure, such as this one:
One is a bun.
Two is a shoe.
Three is a tree.
Four is a door.
Five is a hive.
Six are sticks.
Seven is heaven.
Eight is a gate.
Nine is a line.
Ten is a hen.

This rhyme provides ten "peg words" ("bun," "shoe," etc.), and in memorizing something you can "hang" the materials to be remembered on these "pegs." Let's imagine that you want to remember the list of topics you need to discuss with your adviser. If you want to discuss your unhappiness with chemistry class, you might form an association between chemistry and the first peg, "bun." You might picture a hamburger bun floating in an Erlenmeyer flask. If you also want to discuss your plans for after graduation, you might form an association between some aspect of those plans and the next peg, "shoe." (You could think about how you plan to pay your way after college by selling shoes.) Then, when meeting with your adviser, all you have to do is think through that silly rhyme again. When you think of "one is a bun," it's highly likely that the image of the flask (and therefore of chemistry lab) will come to mind. With "two is a shoe," you'll be reminded of your job plans. And so on.

Hundreds of variations on these techniques-the first-letter mnemonics, visualization strategies, peg-word systems-are available. Some variations are taught in self-help books (you've probably seen the ads-"How to Improve Your Memory!"); some are taught as part of corporate management training. But all the variations use the same basic scheme: To remember a list with no apparent organization, you impose an organization on it by using a tightly organized skeleton or scaffold. And crucially, these systems all work. They help you remember individual items, and they also help you remember those items in a specific sequence. Figure 6.15 shows some of the data from one early study; many other studies confirm this pattern (e.g., Bower, 1970, 1972; Bower & Reitman, 1972; Christen & Bjork, 1976; Higbee, 1977; Roediger, 1980; Ross & Lawrence, 1968; Yates, 1966).
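The mechanics of a peg-word system are simple enough to sketch in code. The following is an illustrative sketch only (it is not from the textbook, and the example topics are hypothetical): it pairs each to-be-remembered item, in order, with the next peg word from the rhyme, mirroring the "hang each item on a peg" procedure described above.

```python
# The ten peg words from the "one is a bun" rhyme, in order.
PEG_WORDS = ["bun", "shoe", "tree", "door", "hive",
             "sticks", "heaven", "gate", "line", "hen"]

def attach_to_pegs(items):
    """Pair each item with the next peg word, in sequence.

    The rhyme later serves as the retrieval scaffold: thinking
    "one is a bun" cues the first item, "two is a shoe" the
    second, and so on.
    """
    if len(items) > len(PEG_WORDS):
        raise ValueError("This rhyme provides only ten pegs.")
    return list(zip(PEG_WORDS, items))

# Hypothetical example: topics to raise with an adviser.
topics = ["chemistry class", "post-graduation plans"]
print(attach_to_pegs(topics))
```

Of course, the real mnemonic work happens in the imagery (the bun floating in the Erlenmeyer flask); the sketch only captures the ordered peg-to-item mapping that makes sequence recall possible.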
All of this strengthens our central claim: Mnemonics work because they impose an organization on the materials you're trying to memorize. And, consistently and powerfully, organizing improves recall.
Given the power of mnemonics, students are well advised to use these strategies in their studies. In fact, for many topics there are online databases containing thousands of useful mnemonics- helping medical students to memorize symptom lists, chemistry students to memorize the periodic table, neuroscientists to remember the brain's anatomy, and more.
Bear in mind, though, that there's a downside to the use of mnemonics in educational settings. When using a mnemonic, you typically focus on just one aspect of the material you're trying to memorize-for example, just the first letter of the word to be remembered-and so you may cut short your effort toward understanding this material, and likewise your effort toward finding multiple connections between the material and other things you know. To put this point differently, mnemonic use involves a trade-off. If you focus on just one or two memory connections, you'll spend little time thinking about other possible connections, including those that might help you understand the material. This trade-off will be fine if you don't care very much about the meaning of the material. (Do you care why, in taxonomy, "order" is a subset of "class," rather than the other way around?) But the trade-off is troubling if you're trying to memorize material that is meaningful. In this case, you'd be better served by a memory strategy that seeks out multiple connections between the material you're trying to learn and things you already know. This effort toward multiple links will help you in two ways. First, it will foster your understanding of the material to be remembered, and so will lead to better, richer, deeper learning. Second, it will help you retrieve this information later. We've already suggested that memory connections serve as retrieval paths, and the more paths there are, the easier it will be to find the target material later.
For these reasons, mnemonic use may not be the best approach in many situations. Still, the fact remains that mnemonics are immensely useful in some settings (what were those rainbow colors?), and this confirms our initial point: Organization promotes memory.
Understanding and Memorizing

So far, we've said a lot about how people memorize simple stimulus materials-lists of randomly selected words, or colors that have to be learned in the right sequence. In our day-to-day lives, however, we typically want to remember more meaningful, more complicated material. We want to remember the episodes we experience, the details of rich scenes we've observed, or the many-step arguments we've read in a book. Do the same memory principles apply to these cases?
The answer is clearly yes (although we'll have more to say about this issue in Chapter 8). In other words, your memory for events, or pictures, or complex bodies of knowledge is enormously dependent on your being able to organize the material to be remembered. With these more complicated materials, though, we've suggested that your best bet for organization isn't some arbitrary skeleton like those used in mnemonics. Instead, the best organization of these complex materials is generally dependent on understanding. That is, you remember best what you understand best. There are many ways to show that this is true. For example, we can give people a paragraph to read and test their comprehension by asking questions about the material. Sometime later, we can test their memory. The results are clear: The better the participants' understanding of a sentence or a paragraph when questioned immediately after viewing the material, the greater the likelihood that they will remember the material after a delay (for classic data on this topic, see Bransford, 1979).
Likewise, consider the material you're learning right now in the courses you're taking. Will you remember this material 5 years from now, or 10, or 20? The answer depends on how well you understand the material, and one measure of understanding is the grade you earn in a course. With full and rich understanding, you're likely to earn an A; with poor understanding, your grade is likely to be lower. This leads to a prediction: If understanding is (as we've proposed) important for memory, then the higher someone's grade in a course, the more likely that person is to remember the course contents, even years later. This is exactly what the data show, with A students remembering the material quite well, and C students remembering much less (Conway, Cohen, & Stanhope, 1992). The relationship between understanding and memory can also be demonstrated in another way by manipulating whether people understand the material or not. For example, in an early experiment by Bransford and Johnson (1972, p. 722), participants read this passage:
The procedure is actually quite simple. First you arrange items into different groups. Of course, one pile may be sufficient depending on how much there is to do. If you have to go somewhere else due to lack of facilities, that is the next step; otherwise you are pretty well set. It is important not to overdo things. That is, it is better to do too few things at once than too many. In the short run, this may not seem important, but complications can easily arise. A mistake can be expensive as well. At first, the whole procedure will seem complicated. Soon, however, it will become just another facet of life. It is difficult to foresee any end to the necessity for this task in the immediate future, but then, one never can tell. After the procedure is completed, one arranges the materials into different groups again. Then they can be put into their appropriate places. Eventually they will be used once more, and the whole cycle will then have to be repeated. However, that is part of life.
You're probably puzzled by the passage, and so are most research participants. The story is easy to understand, though, if we give it a title: "Doing the Laundry." In the experiment, some participants were given the title before reading the passage; others were not. Participants in the first group easily understood the passage and were able to remember it after a delay. Participants in the second group, reading the same words, weren't confronting a meaningful passage and did poorly on the memory test. (For related data, see Bransford & Franks, 1971; Sulin & Dooling, 1974. For another example, see Figure 6.16.)
Similar effects can be documented with nonverbal materials. Consider the picture shown in Figure 6.17. At first it looks like a bunch of meaningless blotches; with some study, though, you may discover a familiar object. Wiseman and Neisser (1974) tested people's memory for this picture. Consistent with what we've seen so far, their memory was good if they understood the picture-and bad otherwise. (Also see Bower, Karlin, & Dueck, 1975; Mandler & Ritchey, 1977; Rubin & Kontis, 1983.)
The Study of Memory Acquisition

This chapter has largely been about memory acquisition. How do we acquire new memories? How is new information, new knowledge, established in long-term memory? In more pragmatic terms, what is the best, most effective way to learn? We now have answers to these questions, but our discussion has indicated that we need to place these questions into a broader context-with attention on the substantial contribution from the memorizer, and also a consideration of the interconnections among acquisition, retrieval, and storage.

The Contribution of the Memorizer

Over and over, we've seen that memory depends on connections among ideas, connections fostered by the steps you take in your effort toward organizing and understanding the materials you encounter. Hand in hand with this, it appears that memories are not established by sheer contact with the items you're hoping to remember. If you're merely exposed to the items without giving them any thought, then subsequent recall of those items will be poor. These points draw attention to the huge role played by the memorizer. If, for example, we wish to predict whether this or that event will be recalled, it isn't enough to know that someone was exposed to the event. Instead, we need to ask what the person was doing during the event. Did she only do maintenance rehearsal, or did she engage the material in some other way? If the latter, how did she think about the material? Did she pay attention to the appearance of the words or to their meaning? If she thought about meaning, was she able to understand the material? These considerations are crucial for predicting the success of memory.
The contribution of the memorizer is also evident in another way. We've argued that learning depends on making connections, but connections to what? If you want to connect the to-be-remembered material to other knowledge, to other memories, then you need to have that other knowledge-you need to have other (potentially relevant) memories that you can "hook" the new material on to. This point helps us understand why sports fans have an easy time learning new facts about sports, and why car mechanics can easily learn new facts about cars, and why memory experts easily memorize new information about memory. In each situation, the person enters the learning situation with a considerable advantage-a rich framework that the new materials can be woven into. But, conversely, if someone enters a learning situation with little relevant background, then there's no framework, nothing to connect to, and learning will be more difficult. Plainly, then, if we want to predict someone's success in memorizing, we need to consider what other knowledge the individual brings into the situation.

The Links among Acquisition, Retrieval, and Storage

These points lead us to another important theme. The emphasis in this chapter has been on memory acquisition, but we've now seen that claims about acquisition cannot be separated from claims about storage and retrieval. For example, why is memory acquisition improved by organization? We've suggested that organization provides retrieval paths, making the memories "findable" later on, and this is a claim about retrieval. Therefore, our claims about acquisition are intertwined with claims about retrieval.
Likewise, we just noted that your ability to learn new material depends, in part, on your having a framework of prior knowledge to which the new materials can be tied. In this way, claims about memory acquisition need to be coordinated with claims about the nature of what is already in storage.
These interactions among acquisition, knowledge, and retrieval are crucial for our theorizing. But the interactions also have important implications for learning, for forgetting, and for memory accuracy. The next two chapters explore some of those implications.

COGNITIVE PSYCHOLOGY AND EDUCATION: How Should I Study?

Throughout your life, you encounter information that you hope to remember later-whether you're a student taking courses or an employee in training for a new job. In these and many other settings, what helpful lessons can you draw from memory research?
For a start, bear in mind that the intention to memorize, on its own, has no effect. Therefore, you don't need any special "memorizing steps." Instead, you should focus on making sure you understand the material, because if you do, you're likely to remember it.
As a specific strategy, it's useful to spend a moment after a class, or after you've done a reading assignment, to quiz yourself about what you've just learned. You might ask questions like these: "What are the new ideas here?"; "Do these new ideas fit with other things I know?"; "Do I know what evidence or arguments support the claims here?" Answering questions like these will help you find meaningful connections within the material you're learning, and between this material and other information already in your memory. In the same spirit, it's often useful to rephrase material you encounter, putting it into your own words. Doing this will force you to think about what the words mean-again, a good thing for memory. Surveys suggest, however, that most students rely on study strategies that are much more passive than this-in fact, far too passive. Most students try to learn materials by simply rereading the textbook or reading over their notes several times. The problem with these strategies should be obvious: As the chapter explains, memories are produced by active engagement with materials, not by passive exposure.
As a related point, it's often useful to study with a friend-so that he or she can explain topics to you, and you can do the same in return. This step has several advantages. In explaining things, you're forced into a more active role. Working with a friend is also likely to enhance your understanding, because each of you can help the other to understand bits you're having trouble with. You'll also benefit from hearing your friend's perspective on the materials. This additional perspective offers the possibility of creating new connections among ideas, making the information easier to recall later on.
Memory will also be best if you spread your studying out across multiple occasions-using spaced learning (e.g., spreading out your learning across several days) rather than massed learning (essentially, "cramming" all at once). It also helps to vary your focus while studying-working on your history assignment for a while, then shifting to math, then over to the novel your English professor assigned, and then back to history. There are several reasons for this, including the fact that spaced learning and a changing focus will make it likely that you'll bring a somewhat different perspective to the material each time you turn to it. This new perspective will let you see connections you didn't see before; and-again-these new connections provide retrieval paths that can promote recall.
Spaced learning also has another advantage. With this form of learning, some time will pass between the episodes of learning. (Imagine, for example, that you study your sociology text for a while on Tuesday night and then return to it on Thursday, so that two days go by between these study sessions.) This situation allows some amount of forgetting to take place, and that's actually helpful, because now each episode of learning will have to take a bit more effort, a bit more thought. This stands in contrast to massed learning, in which your second and third passes through the material may be separated by only a few minutes. In this setting, the second and third passes may feel easy enough that you zoom through them, with little engagement in the material.
Note an ironic point here: Spaced learning may be more difficult (because of the forgetting in between sessions), but this difficulty leads to better learning overall. Researchers refer to this as "desirable difficulty"-difficulty that may feel obnoxious when you're slogging through the material you hope to learn, but that is nonetheless beneficial, because it leaves you with more complete, more long-lasting memory.
What about mnemonic strategies, such as a peg-word system? These are enormously helpful-but often at a cost. When you're first learning something new, focusing on a mnemonic can divert your time and attention away from efforts at understanding the material, and so you'll end up understanding the material less well. You'll also be left with only the one or two retrieval paths that the mnemonic provides, not the multiple paths created by comprehension. In some circumstances these drawbacks aren't serious-and so, for example, mnemonics are often useful for memorizing dates, place names, or particular bits of terminology. But for richer, more meaningful material, mnemonics may hurt you more than they help. Mnemonics can be more helpful, though, after you've understood the new material. Imagine that you've thoughtfully constructed a many-step argument or a complex derivation of a mathematical formula. Now, imagine that you hope to re-create the argument or the derivation later on-perhaps for an oral presentation or on an exam. In this situation, you've already achieved a level of mastery, and you don't want to lose what you've gained. Here, a mnemonic (like the peg-word system) might be quite helpful, allowing you to remember the full argument or derivation in its proper sequence.
Finally, let's emphasize that there's more to say about these issues. Our discussion here (like Chapter 6 itself) focuses on the "input" side of memory-getting information into storage, so that it's available for use later on. There are also steps you can take that will help you to locate information in the vast warehouse of your memory, and still other steps that you can take to avoid forgetting materials you've already learned. Discussion of those steps, however, depends on materials we'll cover in Chapters 7 and 8.

COGNITIVE PSYCHOLOGY AND THE LAW
The Video-Recorder View
One popular conception of memory is called the "video-recorder view." According to this commonsense view, everything in front of your eyes gets recorded into memory, much as a video camera records everything in front of the lens. This view seems to be widely held, but it is simply wrong. As Chapter 6 discusses, information gets established in memory only if you pay attention to it and think about it. Mere exposure isn't enough.
Wrong or not, the video-recorder view influences how many people think about memory-including eyewitness memory. For example, many people believe that it's possible to hypnotize an eyewitness and then "return" the (hypnotized) witness to the scene of the crime. The idea is that the witness will then be able to recall minute details-the exact words spoken, precisely how things appeared, and more. All this would make sense if memory were like a video recorder. In that case, hypnosis would be similar to rewinding the tape and playing it again, with the prospect of noting things on the "playback" that had been overlooked during the initial event. However, none of this is correct. There is no evidence that hypnosis improves memory. Details that were overlooked the first time are simply not recorded in memory, and neither hypnosis nor any other technique can bring them back. Our memories are also selective in a way that video recorders are not. You'd worry, of course, if your DVD player or the video recorder on your phone had gaps in the playback, skipping every other second or missing half the image. But memories, in contrast, often have gaps in them. People recall what they paid attention to, and if someone can't recall every part of an event, this merely tells us that the person wasn't paying attention to everything. (And, in light of what we know about attention, the person couldn't have paid attention to everything.) It's inevitable, then, that people (including eyewitnesses to crimes) will remember some aspects of an event but not others, and we shouldn't use the incompleteness of someone's memory as a basis for distrusting what the person does recall.
In addition, storage in memory is in some ways superior to storage in a video recorder, and in some ways worse. Humans are wonderfully able to draw information from many sources and integrate this information to produce knowledge that sensibly represents the full fabric of what they've learned. In this way, humans are better than recorders-because recorders are, in essence, stuck with whatever they recorded at the outset, with no way to update a recording if, for example, new information demands a reinterpretation of earlier events. At the same time, the recorder does preserve the initial event, "uncontaminated" by later knowledge. Here's an illustration of how these points play out. Participants in one study read a description of an interaction between a man and a woman. Half of the participants later learned that the man ended up proposing marriage to the woman. Other participants were told that the man later assaulted the woman. All participants were then asked to recall the original interaction as accurately as they could. Those who learned of the subsequent marriage recalled the original interaction as more romantic than it actually was; those who learned of the subsequent assault recalled the interaction as more aggressive than it was. In both cases, knowledge of events after the to-be-remembered interaction was apparently integrated with recall of the interaction itself. If we were to put this positively, we would say that memory for the interaction was appropriately "updated" in light of subsequent information. (After all, why should you hold on to a view of this interaction that's now been made obsolete by later-arriving information?) But if we were to put the same point negatively, we might say that memory for the interaction was "distorted" by knowledge of later events.
Other examples, contrasting our memories with video recorders, are easy to find. The overall idea, though, is that once we understand how memory works, we gain more realistic expectations about what eyewitnesses will or will not be able to remember. With these realistic expectations, we'll be in a much better position to evaluate and understand eyewitness evidence.