Is approximate numerical judgment truly modality-independent? Visual, auditory, and cross-modal comparisons


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2013-11

AUTHORS

Midori Tokita, Yui Ashitani, Akira Ishiguchi

ABSTRACT

The numerosity of any set of discrete elements can be depicted by a genuinely abstract number representation, irrespective of whether they are presented in the visual or auditory modality. The accumulator model predicts that no cost should apply for comparing numerosities within and across modalities. However, in behavioral studies, some inconsistencies have been apparent in the performance of number comparisons among different modalities. In this study, we tested whether and how numerical comparisons of visual, auditory, and cross-modal presentations would differ under adequate control of stimulus presentation. We measured the Weber fractions and points of subjective equality of numerical discrimination in visual, auditory, and cross-modal conditions. The results demonstrated differences between the performances in visual and auditory conditions, such that numerical discrimination of an auditory sequence was more precise than that of a visual sequence. The performance of cross-modal trials lay between performance levels in the visual and auditory conditions. Moreover, the number of visual stimuli was overestimated as compared to that of auditory stimuli. Our findings imply that the process of approximate numerical representation is complex and involves multiple stages, including accumulation and decision processes.

PAGES

1852-1861

Identifiers

URI

http://scigraph.springernature.com/pub.10.3758/s13414-013-0526-x

DOI

http://dx.doi.org/10.3758/s13414-013-0526-x

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1045062022

PUBMED

https://www.ncbi.nlm.nih.gov/pubmed/23913137


Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
Incoming Citations: browse incoming citations for this publication using opencitations.net.

JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record with an external JSON-LD service such as the JSON-LD Playground or Google's Structured Data Testing Tool (SDTT).
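Because the record is plain JSON-LD, it can also be inspected locally with Python's standard json module. A minimal sketch, using only a small excerpt of the record shown below (field names follow the JSON-LD exactly):

```python
import json

# A minimal excerpt of the SciGraph JSON-LD record shown below.
record_jsonld = """
[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
    "name": "Is approximate numerical judgment truly modality-independent? Visual, auditory, and cross-modal comparisons",
    "datePublished": "2013-11",
    "pagination": "1852-1861",
    "author": [
      {"familyName": "Tokita", "givenName": "Midori", "type": "Person"},
      {"familyName": "Ashitani", "givenName": "Yui", "type": "Person"},
      {"familyName": "Ishiguchi", "givenName": "Akira", "type": "Person"}
    ]
  }
]
"""

# The top level is a JSON array; the article record is its first element.
record = json.loads(record_jsonld)[0]
authors = ", ".join(f"{a['givenName']} {a['familyName']}" for a in record["author"])
print(record["name"])
print(authors)  # Midori Tokita, Yui Ashitani, Akira Ishiguchi
```

The same pattern extends to any field in the full record (citations, identifiers, journal metadata), since they are ordinary JSON objects and arrays.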

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/1701", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Psychology", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/17", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Psychology and Cognitive Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Auditory Perception", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Discrimination (Psychology)", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Humans", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Judgment", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Mathematics", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Pattern Recognition, Visual", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Ochanomizu University", 
          "id": "https://www.grid.ac/institutes/grid.412314.1", 
          "name": [
            "Graduate School of Humanities and Sciences, Ochanomizu University, 2-1-1 Otsuka, Bunkyo-ku, 112-8610, Tokyo, Japan"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Tokita", 
        "givenName": "Midori", 
        "id": "sg:person.01131244633.54", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01131244633.54"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Ochanomizu University", 
          "id": "https://www.grid.ac/institutes/grid.412314.1", 
          "name": [
            "Graduate School of Humanities and Sciences, Ochanomizu University, 2-1-1 Otsuka, Bunkyo-ku, 112-8610, Tokyo, Japan"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Ashitani", 
        "givenName": "Yui", 
        "id": "sg:person.0733206000.62", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0733206000.62"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Ochanomizu University", 
          "id": "https://www.grid.ac/institutes/grid.412314.1", 
          "name": [
            "Graduate School of Humanities and Sciences, Ochanomizu University, 2-1-1 Otsuka, Bunkyo-ku, 112-8610, Tokyo, Japan"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Ishiguchi", 
        "givenName": "Akira", 
        "id": "sg:person.011400407201.42", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011400407201.42"
        ], 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "sg:pub.10.3758/bf03196206", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1000203552", 
          "https://doi.org/10.3758/bf03196206"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.3758/s13423-011-0072-2", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1000445542", 
          "https://doi.org/10.3758/s13423-011-0072-2"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1037/0096-1523.7.6.1327", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1000540191"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/j.tics.2008.04.002", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1005283000"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/s1364-6613(97)01008-5", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1005596851"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/j.cognition.2008.05.006", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1010040590"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1080/17470210500314729", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1010606360"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1037/a0024965", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1010621418"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.3758/bf03193716", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1011153081", 
          "https://doi.org/10.3758/bf03193716"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1111/j.1467-9280.2006.01696.x", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1011331490"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/j.tics.2010.09.008", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1017626612"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/j.brainres.2006.05.104", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1018575821"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1207/s15327078in0503_2", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1021413572"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/j.cub.2005.04.056", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1022175008"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/s0010-0277(02)00178-6", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1022330053"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/0010-0277(92)90050-r", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1024259447"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1111/j.1467-9280.2006.01719.x", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1024567238"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/j.tics.2004.05.002", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1026002843"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1037/0096-1523.26.6.1770", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1027549707"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1073/pnas.0508107103", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1028978114"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1111/j.2044-8295.1975.tb01444.x", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1030797412"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1146/annurev.neuro.051508.135550", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1033969156"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1111/1467-9280.00120", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1037068967"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.3758/app.72.3.561", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1037840854", 
          "https://doi.org/10.3758/app.72.3.561"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1201/9780203009574.ch6", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1041780984"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1037/h0084035", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1042552137"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1037/0097-7403.9.3.320", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1044658027"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/s0010-0277(99)00066-9", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1045471172"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1167/7.10.2", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1046142055"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1111/j.1467-7687.2005.00429.x", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1046591974"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1098/rspb.2003.2414", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1047136967"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1111/j.2044-8295.1951.tb00302.x", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1049235549"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1037/a0019961", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1049712152"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/j.brainres.2008.05.056", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1051307304"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1037/0012-1649.33.3.423", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1051433890"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1111/1467-9280.01453", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1053458273"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://app.dimensions.ai/details/publication/pub.1075258733", 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2013-11", 
    "datePublishedReg": "2013-11-01", 
    "description": "The numerosity of any set of discrete elements can be depicted by a genuinely abstract number representation, irrespective of whether they are presented in the visual or auditory modality. The accumulator model predicts that no cost should apply for comparing numerosities within and across modalities. However, in behavioral studies, some inconsistencies have been apparent in the performance of number comparisons among different modalities. In this study, we tested whether and how numerical comparisons of visual, auditory, and cross-modal presentations would differ under adequate control of stimulus presentation. We measured the Weber fractions and points of subjective equality of numerical discrimination in visual, auditory, and cross-modal conditions. The results demonstrated differences between the performances in visual and auditory conditions, such that numerical discrimination of an auditory sequence was more precise than that of a visual sequence. The performance of cross-modal trials lay between performance levels in the visual and auditory conditions. Moreover, the number of visual stimuli was overestimated as compared to that of auditory stimuli. Our findings imply that the process of approximate numerical representation is complex and involves multiple stages, including accumulation and decision processes. ", 
    "genre": "research_article", 
    "id": "sg:pub.10.3758/s13414-013-0526-x", 
    "inLanguage": [
      "en"
    ], 
    "isAccessibleForFree": true, 
    "isFundedItemOf": [
      {
        "id": "sg:grant.6058193", 
        "type": "MonetaryGrant"
      }
    ], 
    "isPartOf": [
      {
        "id": "sg:journal.1041037", 
        "issn": [
          "1943-3921", 
          "1943-393X"
        ], 
        "name": "Attention, Perception, & Psychophysics", 
        "type": "Periodical"
      }, 
      {
        "issueNumber": "8", 
        "type": "PublicationIssue"
      }, 
      {
        "type": "PublicationVolume", 
        "volumeNumber": "75"
      }
    ], 
    "name": "Is approximate numerical judgment truly modality-independent? Visual, auditory, and cross-modal comparisons", 
    "pagination": "1852-1861", 
    "productId": [
      {
        "name": "readcube_id", 
        "type": "PropertyValue", 
        "value": [
          "d6a52970ebb87187ddafe99304f142a0ea0b40f387eaf8cfa1fd54f0ae593444"
        ]
      }, 
      {
        "name": "pubmed_id", 
        "type": "PropertyValue", 
        "value": [
          "23913137"
        ]
      }, 
      {
        "name": "nlm_unique_id", 
        "type": "PropertyValue", 
        "value": [
          "101495384"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.3758/s13414-013-0526-x"
        ]
      }, 
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1045062022"
        ]
      }
    ], 
    "sameAs": [
      "https://doi.org/10.3758/s13414-013-0526-x", 
      "https://app.dimensions.ai/details/publication/pub.1045062022"
    ], 
    "sdDataset": "articles", 
    "sdDatePublished": "2019-04-10T18:19", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8675_00000507.jsonl", 
    "type": "ScholarlyArticle", 
    "url": "http://link.springer.com/10.3758%2Fs13414-013-0526-x"
  }
]
 

Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML (see license info).

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular linked-data format that is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.3758/s13414-013-0526-x'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.3758/s13414-013-0526-x'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.3758/s13414-013-0526-x'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.3758/s13414-013-0526-x'
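All four curl calls hit the same URL and differ only in the Accept header, i.e. the endpoint uses HTTP content negotiation. The helper below (a sketch; the function name and format keys are our own, not part of the SciGraph API) builds the URL and header pair for a given serialization:

```python
# Map each RDF serialization to the Accept header used in the curl calls above.
ACCEPT = {
    "json-ld": "application/ld+json",
    "nt": "application/n-triples",
    "turtle": "text/turtle",
    "rdf-xml": "application/rdf+xml",
}


def scigraph_request(doi_id: str, fmt: str = "json-ld"):
    """Return (url, headers) for fetching a SciGraph record.

    doi_id is the bare DOI (e.g. "10.3758/s13414-013-0526-x");
    fmt must be one of the ACCEPT keys. Hypothetical helper for illustration.
    """
    url = f"https://scigraph.springernature.com/pub.{doi_id}"
    return url, {"Accept": ACCEPT[fmt]}


url, headers = scigraph_request("10.3758/s13414-013-0526-x", "turtle")
print(url)
print(headers["Accept"])  # text/turtle
```

The returned pair can be passed to any HTTP client (e.g. `requests.get(url, headers=headers)`), mirroring the `curl -H 'Accept: ...'` invocations above.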


 

This table displays all metadata directly associated with this object as RDF triples.

223 TRIPLES      21 PREDICATES      72 URIs      27 LITERALS      15 BLANK NODES

Subject Predicate Object
1 sg:pub.10.3758/s13414-013-0526-x schema:about N1805a0f295994994889e6534033a4477
2 N1b5aaae82a8e4601a451bf5158675068
3 N524a4d9a1b6c42329e73a429246e3e2e
4 N8a429739c7044794be30e745345fe87b
5 Na78a08fc8430401298d67e3eb8ce8a9b
6 Nb870cc0173864feb8980bf2355944afd
7 anzsrc-for:17
8 anzsrc-for:1701
9 schema:author Nee6caecb4d5445f886e67e9f37c1f056
10 schema:citation sg:pub.10.3758/app.72.3.561
11 sg:pub.10.3758/bf03193716
12 sg:pub.10.3758/bf03196206
13 sg:pub.10.3758/s13423-011-0072-2
14 https://app.dimensions.ai/details/publication/pub.1075258733
15 https://doi.org/10.1016/0010-0277(92)90050-r
16 https://doi.org/10.1016/j.brainres.2006.05.104
17 https://doi.org/10.1016/j.brainres.2008.05.056
18 https://doi.org/10.1016/j.cognition.2008.05.006
19 https://doi.org/10.1016/j.cub.2005.04.056
20 https://doi.org/10.1016/j.tics.2004.05.002
21 https://doi.org/10.1016/j.tics.2008.04.002
22 https://doi.org/10.1016/j.tics.2010.09.008
23 https://doi.org/10.1016/s0010-0277(02)00178-6
24 https://doi.org/10.1016/s0010-0277(99)00066-9
25 https://doi.org/10.1016/s1364-6613(97)01008-5
26 https://doi.org/10.1037/0012-1649.33.3.423
27 https://doi.org/10.1037/0096-1523.26.6.1770
28 https://doi.org/10.1037/0096-1523.7.6.1327
29 https://doi.org/10.1037/0097-7403.9.3.320
30 https://doi.org/10.1037/a0019961
31 https://doi.org/10.1037/a0024965
32 https://doi.org/10.1037/h0084035
33 https://doi.org/10.1073/pnas.0508107103
34 https://doi.org/10.1080/17470210500314729
35 https://doi.org/10.1098/rspb.2003.2414
36 https://doi.org/10.1111/1467-9280.00120
37 https://doi.org/10.1111/1467-9280.01453
38 https://doi.org/10.1111/j.1467-7687.2005.00429.x
39 https://doi.org/10.1111/j.1467-9280.2006.01696.x
40 https://doi.org/10.1111/j.1467-9280.2006.01719.x
41 https://doi.org/10.1111/j.2044-8295.1951.tb00302.x
42 https://doi.org/10.1111/j.2044-8295.1975.tb01444.x
43 https://doi.org/10.1146/annurev.neuro.051508.135550
44 https://doi.org/10.1167/7.10.2
45 https://doi.org/10.1201/9780203009574.ch6
46 https://doi.org/10.1207/s15327078in0503_2
47 schema:datePublished 2013-11
48 schema:datePublishedReg 2013-11-01
49 schema:description The numerosity of any set of discrete elements can be depicted by a genuinely abstract number representation, irrespective of whether they are presented in the visual or auditory modality. The accumulator model predicts that no cost should apply for comparing numerosities within and across modalities. However, in behavioral studies, some inconsistencies have been apparent in the performance of number comparisons among different modalities. In this study, we tested whether and how numerical comparisons of visual, auditory, and cross-modal presentations would differ under adequate control of stimulus presentation. We measured the Weber fractions and points of subjective equality of numerical discrimination in visual, auditory, and cross-modal conditions. The results demonstrated differences between the performances in visual and auditory conditions, such that numerical discrimination of an auditory sequence was more precise than that of a visual sequence. The performance of cross-modal trials lay between performance levels in the visual and auditory conditions. Moreover, the number of visual stimuli was overestimated as compared to that of auditory stimuli. Our findings imply that the process of approximate numerical representation is complex and involves multiple stages, including accumulation and decision processes.
50 schema:genre research_article
51 schema:inLanguage en
52 schema:isAccessibleForFree true
53 schema:isPartOf N0ec5255f89674cceafcd6dcafa494f22
54 N2c702da463a94f5ca888d0d88bc28a6b
55 sg:journal.1041037
56 schema:name Is approximate numerical judgment truly modality-independent? Visual, auditory, and cross-modal comparisons
57 schema:pagination 1852-1861
58 schema:productId N3bd5c32d212642a09b067b825092a18b
59 N4b7a5634935247e8a388f47aa3b7a467
60 N82a41fc4e8914d498553922e5b95fe1d
61 Nc0a94b9e6dc741c68925afcd437736fd
62 Nd4d87d4d69e047c28981aac822e97403
63 schema:sameAs https://app.dimensions.ai/details/publication/pub.1045062022
64 https://doi.org/10.3758/s13414-013-0526-x
65 schema:sdDatePublished 2019-04-10T18:19
66 schema:sdLicense https://scigraph.springernature.com/explorer/license/
67 schema:sdPublisher N54310dcf96e64bdd8c5befb7fbbd1550
68 schema:url http://link.springer.com/10.3758%2Fs13414-013-0526-x
69 sgo:license sg:explorer/license/
70 sgo:sdDataset articles
71 rdf:type schema:ScholarlyArticle
72 N0ec5255f89674cceafcd6dcafa494f22 schema:issueNumber 8
73 rdf:type schema:PublicationIssue
74 N0fe2e90557654a32ad4d5b13d2585f4c rdf:first sg:person.0733206000.62
75 rdf:rest N3d3ce5838f1c425caf654bf4ecf8482a
76 N1805a0f295994994889e6534033a4477 schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
77 schema:name Humans
78 rdf:type schema:DefinedTerm
79 N1b5aaae82a8e4601a451bf5158675068 schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
80 schema:name Mathematics
81 rdf:type schema:DefinedTerm
82 N2c702da463a94f5ca888d0d88bc28a6b schema:volumeNumber 75
83 rdf:type schema:PublicationVolume
84 N3bd5c32d212642a09b067b825092a18b schema:name doi
85 schema:value 10.3758/s13414-013-0526-x
86 rdf:type schema:PropertyValue
87 N3d3ce5838f1c425caf654bf4ecf8482a rdf:first sg:person.011400407201.42
88 rdf:rest rdf:nil
89 N4b7a5634935247e8a388f47aa3b7a467 schema:name dimensions_id
90 schema:value pub.1045062022
91 rdf:type schema:PropertyValue
92 N524a4d9a1b6c42329e73a429246e3e2e schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
93 schema:name Pattern Recognition, Visual
94 rdf:type schema:DefinedTerm
95 N54310dcf96e64bdd8c5befb7fbbd1550 schema:name Springer Nature - SN SciGraph project
96 rdf:type schema:Organization
97 N82a41fc4e8914d498553922e5b95fe1d schema:name pubmed_id
98 schema:value 23913137
99 rdf:type schema:PropertyValue
100 N8a429739c7044794be30e745345fe87b schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
101 schema:name Discrimination (Psychology)
102 rdf:type schema:DefinedTerm
103 Na78a08fc8430401298d67e3eb8ce8a9b schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
104 schema:name Auditory Perception
105 rdf:type schema:DefinedTerm
106 Nb870cc0173864feb8980bf2355944afd schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
107 schema:name Judgment
108 rdf:type schema:DefinedTerm
109 Nc0a94b9e6dc741c68925afcd437736fd schema:name nlm_unique_id
110 schema:value 101495384
111 rdf:type schema:PropertyValue
112 Nd4d87d4d69e047c28981aac822e97403 schema:name readcube_id
113 schema:value d6a52970ebb87187ddafe99304f142a0ea0b40f387eaf8cfa1fd54f0ae593444
114 rdf:type schema:PropertyValue
115 Nee6caecb4d5445f886e67e9f37c1f056 rdf:first sg:person.01131244633.54
116 rdf:rest N0fe2e90557654a32ad4d5b13d2585f4c
117 anzsrc-for:17 schema:inDefinedTermSet anzsrc-for:
118 schema:name Psychology and Cognitive Sciences
119 rdf:type schema:DefinedTerm
120 anzsrc-for:1701 schema:inDefinedTermSet anzsrc-for:
121 schema:name Psychology
122 rdf:type schema:DefinedTerm
123 sg:grant.6058193 http://pending.schema.org/fundedItem sg:pub.10.3758/s13414-013-0526-x
124 rdf:type schema:MonetaryGrant
125 sg:journal.1041037 schema:issn 1943-3921
126 1943-393X
127 schema:name Attention, Perception, & Psychophysics
128 rdf:type schema:Periodical
129 sg:person.01131244633.54 schema:affiliation https://www.grid.ac/institutes/grid.412314.1
130 schema:familyName Tokita
131 schema:givenName Midori
132 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01131244633.54
133 rdf:type schema:Person
134 sg:person.011400407201.42 schema:affiliation https://www.grid.ac/institutes/grid.412314.1
135 schema:familyName Ishiguchi
136 schema:givenName Akira
137 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011400407201.42
138 rdf:type schema:Person
139 sg:person.0733206000.62 schema:affiliation https://www.grid.ac/institutes/grid.412314.1
140 schema:familyName Ashitani
141 schema:givenName Yui
142 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0733206000.62
143 rdf:type schema:Person
144 sg:pub.10.3758/app.72.3.561 schema:sameAs https://app.dimensions.ai/details/publication/pub.1037840854
145 https://doi.org/10.3758/app.72.3.561
146 rdf:type schema:CreativeWork
147 sg:pub.10.3758/bf03193716 schema:sameAs https://app.dimensions.ai/details/publication/pub.1011153081
148 https://doi.org/10.3758/bf03193716
149 rdf:type schema:CreativeWork
150 sg:pub.10.3758/bf03196206 schema:sameAs https://app.dimensions.ai/details/publication/pub.1000203552
151 https://doi.org/10.3758/bf03196206
152 rdf:type schema:CreativeWork
153 sg:pub.10.3758/s13423-011-0072-2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1000445542
154 https://doi.org/10.3758/s13423-011-0072-2
155 rdf:type schema:CreativeWork
156 https://app.dimensions.ai/details/publication/pub.1075258733 schema:CreativeWork
157 https://doi.org/10.1016/0010-0277(92)90050-r schema:sameAs https://app.dimensions.ai/details/publication/pub.1024259447
158 rdf:type schema:CreativeWork
159 https://doi.org/10.1016/j.brainres.2006.05.104 schema:sameAs https://app.dimensions.ai/details/publication/pub.1018575821
160 rdf:type schema:CreativeWork
161 https://doi.org/10.1016/j.brainres.2008.05.056 schema:sameAs https://app.dimensions.ai/details/publication/pub.1051307304
162 rdf:type schema:CreativeWork
163 https://doi.org/10.1016/j.cognition.2008.05.006 schema:sameAs https://app.dimensions.ai/details/publication/pub.1010040590
164 rdf:type schema:CreativeWork
165 https://doi.org/10.1016/j.cub.2005.04.056 schema:sameAs https://app.dimensions.ai/details/publication/pub.1022175008
166 rdf:type schema:CreativeWork
167 https://doi.org/10.1016/j.tics.2004.05.002 schema:sameAs https://app.dimensions.ai/details/publication/pub.1026002843
168 rdf:type schema:CreativeWork
169 https://doi.org/10.1016/j.tics.2008.04.002 schema:sameAs https://app.dimensions.ai/details/publication/pub.1005283000
170 rdf:type schema:CreativeWork
171 https://doi.org/10.1016/j.tics.2010.09.008 schema:sameAs https://app.dimensions.ai/details/publication/pub.1017626612
172 rdf:type schema:CreativeWork
173 https://doi.org/10.1016/s0010-0277(02)00178-6 schema:sameAs https://app.dimensions.ai/details/publication/pub.1022330053
174 rdf:type schema:CreativeWork
175 https://doi.org/10.1016/s0010-0277(99)00066-9 schema:sameAs https://app.dimensions.ai/details/publication/pub.1045471172
176 rdf:type schema:CreativeWork
177 https://doi.org/10.1016/s1364-6613(97)01008-5 schema:sameAs https://app.dimensions.ai/details/publication/pub.1005596851
178 rdf:type schema:CreativeWork
179 https://doi.org/10.1037/0012-1649.33.3.423 schema:sameAs https://app.dimensions.ai/details/publication/pub.1051433890
180 rdf:type schema:CreativeWork
181 https://doi.org/10.1037/0096-1523.26.6.1770 schema:sameAs https://app.dimensions.ai/details/publication/pub.1027549707
182 rdf:type schema:CreativeWork
183 https://doi.org/10.1037/0096-1523.7.6.1327 schema:sameAs https://app.dimensions.ai/details/publication/pub.1000540191
184 rdf:type schema:CreativeWork
185 https://doi.org/10.1037/0097-7403.9.3.320 schema:sameAs https://app.dimensions.ai/details/publication/pub.1044658027
186 rdf:type schema:CreativeWork
187 https://doi.org/10.1037/a0019961 schema:sameAs https://app.dimensions.ai/details/publication/pub.1049712152
188 rdf:type schema:CreativeWork
189 https://doi.org/10.1037/a0024965 schema:sameAs https://app.dimensions.ai/details/publication/pub.1010621418
190 rdf:type schema:CreativeWork
191 https://doi.org/10.1037/h0084035 schema:sameAs https://app.dimensions.ai/details/publication/pub.1042552137
192 rdf:type schema:CreativeWork
193 https://doi.org/10.1073/pnas.0508107103 schema:sameAs https://app.dimensions.ai/details/publication/pub.1028978114
194 rdf:type schema:CreativeWork
195 https://doi.org/10.1080/17470210500314729 schema:sameAs https://app.dimensions.ai/details/publication/pub.1010606360
196 rdf:type schema:CreativeWork
197 https://doi.org/10.1098/rspb.2003.2414 schema:sameAs https://app.dimensions.ai/details/publication/pub.1047136967
198 rdf:type schema:CreativeWork
199 https://doi.org/10.1111/1467-9280.00120 schema:sameAs https://app.dimensions.ai/details/publication/pub.1037068967
200 rdf:type schema:CreativeWork
201 https://doi.org/10.1111/1467-9280.01453 schema:sameAs https://app.dimensions.ai/details/publication/pub.1053458273
202 rdf:type schema:CreativeWork
203 https://doi.org/10.1111/j.1467-7687.2005.00429.x schema:sameAs https://app.dimensions.ai/details/publication/pub.1046591974
204 rdf:type schema:CreativeWork
205 https://doi.org/10.1111/j.1467-9280.2006.01696.x schema:sameAs https://app.dimensions.ai/details/publication/pub.1011331490
206 rdf:type schema:CreativeWork
207 https://doi.org/10.1111/j.1467-9280.2006.01719.x schema:sameAs https://app.dimensions.ai/details/publication/pub.1024567238
208 rdf:type schema:CreativeWork
209 https://doi.org/10.1111/j.2044-8295.1951.tb00302.x schema:sameAs https://app.dimensions.ai/details/publication/pub.1049235549
210 rdf:type schema:CreativeWork
211 https://doi.org/10.1111/j.2044-8295.1975.tb01444.x schema:sameAs https://app.dimensions.ai/details/publication/pub.1030797412
212 rdf:type schema:CreativeWork
213 https://doi.org/10.1146/annurev.neuro.051508.135550 schema:sameAs https://app.dimensions.ai/details/publication/pub.1033969156
214 rdf:type schema:CreativeWork
215 https://doi.org/10.1167/7.10.2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1046142055
216 rdf:type schema:CreativeWork
217 https://doi.org/10.1201/9780203009574.ch6 schema:sameAs https://app.dimensions.ai/details/publication/pub.1041780984
218 rdf:type schema:CreativeWork
219 https://doi.org/10.1207/s15327078in0503_2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1021413572
220 rdf:type schema:CreativeWork
221 https://www.grid.ac/institutes/grid.412314.1 schema:alternateName Ochanomizu University
222 schema:name Graduate School of Humanities and Sciences, Ochanomizu University, 2-1-1 Otsuka, Bunkyo-ku, 112-8610, Tokyo, Japan
223 rdf:type schema:Organization
 



