The fidelity of visual and auditory memory
2019-04-01
AUTHORS: Michele E. Gloede, Melissa K. Gregg
ABSTRACT: Recent studies show that recognition memory for pictures is consistently better than recognition memory for sounds. The purpose of this experiment was to compare the fidelity of auditory and visual memory to better understand the reported differences in the two memory systems. Participants received a study phase with pictures/sounds, followed by a same-day memory test or a delayed recognition memory test. During the memory test, participants were presented with pictures/sounds that were old (presented during study), novel foils not presented during study, or exemplar foils that were variants of objects presented during study. Participants were instructed to classify each picture/sound as "old" or "new" by pressing a corresponding key. The same-day memory task revealed fundamental differences in visual and auditory memory: auditory representations are coarse and gist-based, while visual representations are highly detailed. However, auditory and visual memory performance was similar after a delay of 2 and 7 days and both types of memory representations were more coarse and gist-based. The results make an important contribution to our understanding of how the world is represented in auditory and visual memory.
PAGES: 1-8
http://scigraph.springernature.com/pub.10.3758/s13423-019-01597-7
DOI: http://dx.doi.org/10.3758/s13423-019-01597-7
DIMENSIONS: https://app.dimensions.ai/details/publication/pub.1113173481
PUBMED: https://www.ncbi.nlm.nih.gov/pubmed/30937831
JSON-LD is the canonical representation for SciGraph data.
TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.
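Alternatively, the record can be processed locally. The following is a minimal sketch, assuming the PyLD and requests packages are installed and that the record URL and its remote @context are reachable:

# Fetch the record and expand it against its remote @context with PyLD.
# Assumes the `requests` and `PyLD` packages; network access is required.
import requests
from pyld import jsonld

URL = "https://scigraph.springernature.com/pub.10.3758/s13423-019-01597-7"
doc = requests.get(URL, headers={"Accept": "application/ld+json"}).json()

# Expansion resolves shorthand keys ("name", "author", ...) into full IRIs,
# mirroring what the JSON-LD Playground shows for the same record.
expanded = jsonld.expand(doc)
print(list(expanded[0].keys()))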
[
{
"@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
"about": [
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/1701",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Psychology",
"type": "DefinedTerm"
},
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/17",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Psychology and Cognitive Sciences",
"type": "DefinedTerm"
}
],
"author": [
{
"affiliation": {
"alternateName": "University of Wisconsin\u2013Parkside",
"id": "https://www.grid.ac/institutes/grid.267475.5",
"name": [
"Department of Psychology, University of Wisconsin-Parkside, Parkside, 53144, Kenosha, WI, USA"
],
"type": "Organization"
},
"familyName": "Gloede",
"givenName": "Michele E.",
"type": "Person"
},
{
"affiliation": {
"alternateName": "University of Wisconsin\u2013Parkside",
"id": "https://www.grid.ac/institutes/grid.267475.5",
"name": [
"Department of Psychology, University of Wisconsin-Parkside, Parkside, 53144, Kenosha, WI, USA"
],
"type": "Organization"
},
"familyName": "Gregg",
"givenName": "Melissa K.",
"type": "Person"
}
],
"citation": [
{
"id": "sg:pub.10.3758/s13423-015-0800-0",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1000183977",
"https://doi.org/10.3758/s13423-015-0800-0"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/s0167-8760(03)00122-3",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1003344638"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1037/a0019165",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1003408183"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/s0022-5371(67)80067-7",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1003549531"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1037/1076-898x.9.2.101",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1006185614"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1371/journal.pone.0089914",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1007173012"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1073/pnas.0803390105",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1011697486"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1080/17470218.2016.1183686",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1013023119"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1080/14640747308400340",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1013389232"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.3758/bf03337426",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1014088238",
"https://doi.org/10.3758/bf03337426"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/j.cub.2007.05.060",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1016079392"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1177/0956797612457375",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1018830642"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/s0926-6410(01)00002-7",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1023428846"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/s00221-007-0894-3",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1025525949",
"https://doi.org/10.1007/s00221-007-0894-3"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1037/0278-7393.27.5.1211",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1025880065"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1037/0033-295x.83.2.157",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1028430078"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1073/pnas.0811884106",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1031147558"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.3758/bf03203962",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1032616363",
"https://doi.org/10.3758/bf03203962"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1177/0956797610385359",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1034748657"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.3758/s13423-011-0074-0",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1040955933",
"https://doi.org/10.3758/s13423-011-0074-0"
],
"type": "CreativeWork"
}
],
"datePublished": "2019-04-01",
"datePublishedReg": "2019-04-01",
"description": "Recent studies show that recognition memory for pictures is consistently better than recognition memory for sounds. The purpose of this experiment was to compare the fidelity of auditory and visual memory to better understand the reported differences in the two memory systems. Participants received a study phase with pictures/sounds, followed by a same-day memory test or a delayed recognition memory test. During the memory test, participants were presented with pictures/sounds that were old (presented during study), novel foils not presented during study, or exemplar foils that were variants of objects presented during study. Participants were instructed to classify each picture/sound as \"old\" or \"new\" by pressing a corresponding key. The same-day memory task revealed fundamental differences in visual and auditory memory: auditory representations are coarse and gist-based, while visual representations are highly detailed. However, auditory and visual memory performance was similar after a delay of 2 and 7 days and both types of memory representations were more coarse and gist-based. The results make an important contribution to our understanding of how the world is represented in auditory and visual memory.",
"genre": "research_article",
"id": "sg:pub.10.3758/s13423-019-01597-7",
"inLanguage": [
"en"
],
"isAccessibleForFree": false,
"isPartOf": [
{
"id": "sg:journal.1082645",
"issn": [
"1069-9384",
"1531-5320"
],
"name": "Bulletin of the Psychonomic Society",
"type": "Periodical"
}
],
"name": "The fidelity of visual and auditory memory",
"pagination": "1-8",
"productId": [
{
"name": "readcube_id",
"type": "PropertyValue",
"value": [
"702f59d4c1521cebb752c7cc9bcf71f8fa3942ada1c379fb519b5a074990c27b"
]
},
{
"name": "pubmed_id",
"type": "PropertyValue",
"value": [
"30937831"
]
},
{
"name": "nlm_unique_id",
"type": "PropertyValue",
"value": [
"9502924"
]
},
{
"name": "doi",
"type": "PropertyValue",
"value": [
"10.3758/s13423-019-01597-7"
]
},
{
"name": "dimensions_id",
"type": "PropertyValue",
"value": [
"pub.1113173481"
]
}
],
"sameAs": [
"https://doi.org/10.3758/s13423-019-01597-7",
"https://app.dimensions.ai/details/publication/pub.1113173481"
],
"sdDataset": "articles",
"sdDatePublished": "2019-04-11T14:02",
"sdLicense": "https://scigraph.springernature.com/explorer/license/",
"sdPublisher": {
"name": "Springer Nature - SN SciGraph project",
"type": "Organization"
},
"sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000371_0000000371/records_130830_00000006.jsonl",
"type": "ScholarlyArticle",
"url": "https://link.springer.com/10.3758%2Fs13423-019-01597-7"
}
]
Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.
JSON-LD is a popular format for linked data which is fully compatible with JSON.
curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.3758/s13423-019-01597-7'
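Because the response is plain JSON, the same request can be issued from Python using only the standard library; a small sketch (the endpoint and payload shape are assumed to match the record above):

# Python equivalent of the curl call above, using only the standard library.
import json
import urllib.request

URL = "https://scigraph.springernature.com/pub.10.3758/s13423-019-01597-7"
req = urllib.request.Request(URL, headers={"Accept": "application/ld+json"})
with urllib.request.urlopen(req) as resp:
    record = json.load(resp)[0]  # the payload is a one-element JSON array

print(record["name"])                               # article title
print([a["familyName"] for a in record["author"]])  # author surnames
print(record["sameAs"][0])                          # DOI URL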
N-Triples is a line-based linked data format ideal for batch operations.
curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.3758/s13423-019-01597-7'
Turtle is a human-readable linked data format.
curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.3758/s13423-019-01597-7'
RDF/XML is a standard XML format for linked data.
curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.3758/s13423-019-01597-7'
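Any of these serializations can be loaded into an RDF graph for querying; a sketch assuming the rdflib and requests packages and a reachable endpoint:

# Load the Turtle serialization into an rdflib graph and print a few triples.
import requests
from rdflib import Graph

URL = "https://scigraph.springernature.com/pub.10.3758/s13423-019-01597-7"
ttl = requests.get(URL, headers={"Accept": "text/turtle"}).text

g = Graph()
g.parse(data=ttl, format="turtle")

for s, p, o in list(g)[:5]:  # show the first few triples
    print(s, p, o)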
The RDF metadata directly associated with this object comprises 133 triples, 21 predicates, 46 URIs, 18 literals, and 7 blank nodes.
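These summary counts can be recomputed from the downloaded graph itself; a sketch assuming the rdflib and requests packages (counting distinct terms is an assumption about how SciGraph derives the URI, literal, and blank-node totals):

# Recompute the triple/predicate/term counts from the downloaded graph.
import requests
from rdflib import Graph, URIRef, Literal, BNode

URL = "https://scigraph.springernature.com/pub.10.3758/s13423-019-01597-7"
g = Graph()
g.parse(data=requests.get(URL, headers={"Accept": "text/turtle"}).text,
        format="turtle")

terms = {term for triple in g for term in triple}
print("triples:    ", len(g))
print("predicates: ", len(set(g.predicates())))
print("URIs:       ", sum(isinstance(t, URIRef) for t in terms))
print("literals:   ", sum(isinstance(t, Literal) for t in terms))
print("blank nodes:", sum(isinstance(t, BNode) for t in terms))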