Ontology type: schema:ScholarlyArticle
Open Access: True
Date published: 2013-11
AUTHORS: Midori Tokita, Yui Ashitani, Akira Ishiguchi
ABSTRACT: The numerosity of any set of discrete elements can be depicted by a genuinely abstract number representation, irrespective of whether they are presented in the visual or auditory modality. The accumulator model predicts that no cost should apply for comparing numerosities within and across modalities. However, in behavioral studies, some inconsistencies have been apparent in the performance of number comparisons among different modalities. In this study, we tested whether and how numerical comparisons of visual, auditory, and cross-modal presentations would differ under adequate control of stimulus presentation. We measured the Weber fractions and points of subjective equality of numerical discrimination in visual, auditory, and cross-modal conditions. The results demonstrated differences between the performances in visual and auditory conditions, such that numerical discrimination of an auditory sequence was more precise than that of a visual sequence. The performance of cross-modal trials lay between performance levels in the visual and auditory conditions. Moreover, the number of visual stimuli was overestimated as compared to that of auditory stimuli. Our findings imply that the process of approximate numerical representation is complex and involves multiple stages, including accumulation and decision processes.
PAGES: 1852-1861
SciGraph: http://scigraph.springernature.com/pub.10.3758/s13414-013-0526-x
DOI: http://dx.doi.org/10.3758/s13414-013-0526-x
DIMENSIONS: https://app.dimensions.ai/details/publication/pub.1045062022
PUBMED: https://www.ncbi.nlm.nih.gov/pubmed/23913137
JSON-LD is the canonical representation for SciGraph data.
TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google's Structured Data Testing Tool (SDTT).
[
{
"@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
"about": [
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/1701",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Psychology",
"type": "DefinedTerm"
},
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/17",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Psychology and Cognitive Sciences",
"type": "DefinedTerm"
},
{
"inDefinedTermSet": "https://www.nlm.nih.gov/mesh/",
"name": "Auditory Perception",
"type": "DefinedTerm"
},
{
"inDefinedTermSet": "https://www.nlm.nih.gov/mesh/",
"name": "Discrimination (Psychology)",
"type": "DefinedTerm"
},
{
"inDefinedTermSet": "https://www.nlm.nih.gov/mesh/",
"name": "Humans",
"type": "DefinedTerm"
},
{
"inDefinedTermSet": "https://www.nlm.nih.gov/mesh/",
"name": "Judgment",
"type": "DefinedTerm"
},
{
"inDefinedTermSet": "https://www.nlm.nih.gov/mesh/",
"name": "Mathematics",
"type": "DefinedTerm"
},
{
"inDefinedTermSet": "https://www.nlm.nih.gov/mesh/",
"name": "Pattern Recognition, Visual",
"type": "DefinedTerm"
}
],
"author": [
{
"affiliation": {
"alternateName": "Ochanomizu University",
"id": "https://www.grid.ac/institutes/grid.412314.1",
"name": [
"Graduate School of Humanities and Sciences, Ochanomizu University, 2-1-1 Otsuka, Bunkyo-ku, 112-8610, Tokyo, Japan"
],
"type": "Organization"
},
"familyName": "Tokita",
"givenName": "Midori",
"id": "sg:person.01131244633.54",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01131244633.54"
],
"type": "Person"
},
{
"affiliation": {
"alternateName": "Ochanomizu University",
"id": "https://www.grid.ac/institutes/grid.412314.1",
"name": [
"Graduate School of Humanities and Sciences, Ochanomizu University, 2-1-1 Otsuka, Bunkyo-ku, 112-8610, Tokyo, Japan"
],
"type": "Organization"
},
"familyName": "Ashitani",
"givenName": "Yui",
"id": "sg:person.0733206000.62",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0733206000.62"
],
"type": "Person"
},
{
"affiliation": {
"alternateName": "Ochanomizu University",
"id": "https://www.grid.ac/institutes/grid.412314.1",
"name": [
"Graduate School of Humanities and Sciences, Ochanomizu University, 2-1-1 Otsuka, Bunkyo-ku, 112-8610, Tokyo, Japan"
],
"type": "Organization"
},
"familyName": "Ishiguchi",
"givenName": "Akira",
"id": "sg:person.011400407201.42",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011400407201.42"
],
"type": "Person"
}
],
"citation": [
{
"id": "sg:pub.10.3758/bf03196206",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1000203552",
"https://doi.org/10.3758/bf03196206"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.3758/s13423-011-0072-2",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1000445542",
"https://doi.org/10.3758/s13423-011-0072-2"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1037/0096-1523.7.6.1327",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1000540191"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/j.tics.2008.04.002",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1005283000"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/s1364-6613(97)01008-5",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1005596851"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/j.cognition.2008.05.006",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1010040590"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1080/17470210500314729",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1010606360"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1037/a0024965",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1010621418"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.3758/bf03193716",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1011153081",
"https://doi.org/10.3758/bf03193716"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1111/j.1467-9280.2006.01696.x",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1011331490"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/j.tics.2010.09.008",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1017626612"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/j.brainres.2006.05.104",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1018575821"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1207/s15327078in0503_2",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1021413572"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/j.cub.2005.04.056",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1022175008"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/s0010-0277(02)00178-6",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1022330053"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/0010-0277(92)90050-r",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1024259447"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1111/j.1467-9280.2006.01719.x",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1024567238"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/j.tics.2004.05.002",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1026002843"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1037/0096-1523.26.6.1770",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1027549707"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1073/pnas.0508107103",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1028978114"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1111/j.2044-8295.1975.tb01444.x",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1030797412"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1146/annurev.neuro.051508.135550",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1033969156"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1111/1467-9280.00120",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1037068967"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.3758/app.72.3.561",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1037840854",
"https://doi.org/10.3758/app.72.3.561"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1201/9780203009574.ch6",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1041780984"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1037/h0084035",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1042552137"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1037/0097-7403.9.3.320",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1044658027"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/s0010-0277(99)00066-9",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1045471172"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1167/7.10.2",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1046142055"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1111/j.1467-7687.2005.00429.x",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1046591974"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1098/rspb.2003.2414",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1047136967"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1111/j.2044-8295.1951.tb00302.x",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1049235549"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1037/a0019961",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1049712152"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/j.brainres.2008.05.056",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1051307304"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1037/0012-1649.33.3.423",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1051433890"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1111/1467-9280.01453",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1053458273"
],
"type": "CreativeWork"
},
{
"id": "https://app.dimensions.ai/details/publication/pub.1075258733",
"type": "CreativeWork"
}
],
"datePublished": "2013-11",
"datePublishedReg": "2013-11-01",
"description": "The numerosity of any set of discrete elements can be depicted by a genuinely abstract number representation, irrespective of whether they are presented in the visual or auditory modality. The accumulator model predicts that no cost should apply for comparing numerosities within and across modalities. However, in behavioral studies, some inconsistencies have been apparent in the performance of number comparisons among different modalities. In this study, we tested whether and how numerical comparisons of visual, auditory, and cross-modal presentations would differ under adequate control of stimulus presentation. We measured the Weber fractions and points of subjective equality of numerical discrimination in visual, auditory, and cross-modal conditions. The results demonstrated differences between the performances in visual and auditory conditions, such that numerical discrimination of an auditory sequence was more precise than that of a visual sequence. The performance of cross-modal trials lay between performance levels in the visual and auditory conditions. Moreover, the number of visual stimuli was overestimated as compared to that of auditory stimuli. Our findings imply that the process of approximate numerical representation is complex and involves multiple stages, including accumulation and decision processes. ",
"genre": "research_article",
"id": "sg:pub.10.3758/s13414-013-0526-x",
"inLanguage": [
"en"
],
"isAccessibleForFree": true,
"isFundedItemOf": [
{
"id": "sg:grant.6058193",
"type": "MonetaryGrant"
}
],
"isPartOf": [
{
"id": "sg:journal.1041037",
"issn": [
"1943-3921",
"1943-393X"
],
"name": "Attention, Perception, & Psychophysics",
"type": "Periodical"
},
{
"issueNumber": "8",
"type": "PublicationIssue"
},
{
"type": "PublicationVolume",
"volumeNumber": "75"
}
],
"name": "Is approximate numerical judgment truly modality-independent? Visual, auditory, and cross-modal comparisons",
"pagination": "1852-1861",
"productId": [
{
"name": "readcube_id",
"type": "PropertyValue",
"value": [
"d6a52970ebb87187ddafe99304f142a0ea0b40f387eaf8cfa1fd54f0ae593444"
]
},
{
"name": "pubmed_id",
"type": "PropertyValue",
"value": [
"23913137"
]
},
{
"name": "nlm_unique_id",
"type": "PropertyValue",
"value": [
"101495384"
]
},
{
"name": "doi",
"type": "PropertyValue",
"value": [
"10.3758/s13414-013-0526-x"
]
},
{
"name": "dimensions_id",
"type": "PropertyValue",
"value": [
"pub.1045062022"
]
}
],
"sameAs": [
"https://doi.org/10.3758/s13414-013-0526-x",
"https://app.dimensions.ai/details/publication/pub.1045062022"
],
"sdDataset": "articles",
"sdDatePublished": "2019-04-10T18:19",
"sdLicense": "https://scigraph.springernature.com/explorer/license/",
"sdPublisher": {
"name": "Springer Nature - SN SciGraph project",
"type": "Organization"
},
"sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8675_00000507.jsonl",
"type": "ScholarlyArticle",
"url": "http://link.springer.com/10.3758%2Fs13414-013-0526-x"
}
]
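As a quick illustration of how this record can be consumed, here is a minimal Python sketch that reads the JSON-LD above and pulls out a few core fields. It assumes the record has been saved locally as record.json (a hypothetical filename); the keys match the record shown.

import json

# Load the JSON-LD record shown above (saved locally as record.json).
with open("record.json") as f:
    records = json.load(f)

article = records[0]  # the record is a one-element JSON array

# Extract a few core fields using the keys from the record above.
title = article["name"]
doi = next(p["value"][0] for p in article["productId"] if p["name"] == "doi")
authors = [f'{a["givenName"]} {a["familyName"]}' for a in article["author"]]

print(title)
print(doi)                  # 10.3758/s13414-013-0526-x
print(", ".join(authors))   # Midori Tokita, Yui Ashitani, Akira Ishiguchi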
Download the RDF metadata as: json-ld, nt, turtle, or xml.
JSON-LD is a popular format for linked data which is fully compatible with JSON.
curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.3758/s13414-013-0526-x'
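The same content negotiation works from any HTTP client; a sketch using Python's requests package (an assumption, not part of the SciGraph documentation):

import requests

url = "https://scigraph.springernature.com/pub.10.3758/s13414-013-0526-x"
# Ask for JSON-LD via the Accept header, as with curl above.
resp = requests.get(url, headers={"Accept": "application/ld+json"})
resp.raise_for_status()

record = resp.json()[0]  # the record above is a one-element JSON array
print(record["datePublished"])  # 2013-11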
N-Triples is a line-based linked data format ideal for batch operations.
curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.3758/s13414-013-0526-x'
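Because each N-Triples line holds exactly one subject-predicate-object statement, batch operations reduce to plain line processing. A sketch (record.nt is a hypothetical local copy of the download above):

# Count triples and distinct predicates in an N-Triples file.
predicates = set()
n_triples = 0
with open("record.nt") as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        n_triples += 1
        # Subjects and predicates are IRIs or blank-node labels with no
        # spaces, so the second whitespace-separated token is the predicate.
        predicates.add(line.split(" ", 2)[1])

print(n_triples, "triples,", len(predicates), "distinct predicates")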
Turtle is a human-readable linked data format.
curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.3758/s13414-013-0526-x'
RDF/XML is a standard XML format for linked data.
curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.3758/s13414-013-0526-x'
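To cross-check the summary counts below, any of these serializations can be parsed into an RDF graph; a sketch assuming the rdflib package and a local Turtle copy named record.ttl (hypothetical):

from rdflib import Graph

g = Graph()
g.parse("record.ttl", format="turtle")  # or format="nt" / "xml"

# Graph length is the number of distinct triples; predicates sit in the
# middle position of each (subject, predicate, object) tuple.
print(len(g), "triples")
print(len({p for _, p, _ in g}), "distinct predicates")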
This summary lists all metadata directly associated with this object as RDF triples:
223 TRIPLES
21 PREDICATES
72 URIs
27 LITERALS
15 BLANK NODES