Face search in CCTV surveillance


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2019-09-23

AUTHORS

Mila Mileva, A. Mike Burton

ABSTRACT

Background: We present a series of experiments on visual search in a highly complex environment, security closed-circuit television (CCTV). Using real surveillance footage from a large city transport hub, we ask viewers to search for target individuals. Search targets are presented in a number of ways, using naturally occurring images including their passports and photo ID, social media and custody images/videos. Our aim is to establish general principles for search efficiency within this realistic context.

Results: Across four studies we find that providing multiple photos of the search target consistently improves performance. Three different photos of the target, taken at different times, give substantial performance improvements by comparison to a single target. By contrast, providing targets in moving videos or with biographical context does not lead to improvements in search accuracy.

Conclusions: We discuss the multiple-image advantage in relation to a growing understanding of the importance of within-person variability in face recognition.

PAGES

37

References to SciGraph publications

  • 2007-10. A practical solution to the pervasive problems of p values in PSYCHONOMIC BULLETIN & REVIEW
  • 2007-10. Individual differences in working memory capacity and visual search: The roles of top-down and bottom-up processing in PSYCHONOMIC BULLETIN & REVIEW
  • 2018-06-27. Individual differences in face identity processing in COGNITIVE RESEARCH: PRINCIPLES AND IMPLICATIONS
  • 1998-07. The role of dynamic information in the recognition of unfamiliar faces in MEMORY & COGNITION
  • 1977-01. Levels of processing in facial recognition memory in PSYCHONOMIC BULLETIN & REVIEW
  • 1984-07. Memory for faces: Encoding and retrieval operations in MEMORY & COGNITION
  • 2005-12-06. A search advantage for faces learned in motion in EXPERIMENTAL BRAIN RESEARCH
  • 1987-07. Target-distractor discriminability in visual search in ATTENTION, PERCEPTION, & PSYCHOPHYSICS
  • 2015-04-22. Automatic guidance of attention during real-world visual search in ATTENTION, PERCEPTION, & PSYCHOPHYSICS
  • 2010-02. The Glasgow Face Matching Test in BEHAVIOR RESEARCH METHODS
  • 2006-06. Unfamiliar faces are not faces: Evidence from a matching task in MEMORY & COGNITION
  • 2017-01-30. Individual differences predict low prevalence visual search performance in COGNITIVE RESEARCH: PRINCIPLES AND IMPLICATIONS
  • 2011-06-14. Visual search for arbitrary objects in real scenes in ATTENTION, PERCEPTION, & PSYCHOPHYSICS
  • 2007-02-14. The effects of closed-circuit television on crime: meta-analysis of an English national quasi-experimental multi-site evaluation in JOURNAL OF EXPERIMENTAL CRIMINOLOGY
  • 1996-03. GPOWER: A general power analysis program in BEHAVIOR RESEARCH METHODS
  • 1999-01. Scene-based and object-centered inhibition of return: Evidence for dual orienting mechanisms in ATTENTION, PERCEPTION, & PSYCHOPHYSICS
Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1186/s41235-019-0193-0

    DOI

    http://dx.doi.org/10.1186/s41235-019-0193-0

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1121192787

    PUBMED

    https://www.ncbi.nlm.nih.gov/pubmed/31549263


    Indexing Status: Check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
    Incoming Citations: Browse incoming citations for this publication using opencitations.net.

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT. A short parsing sketch also follows the record below.

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/17", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Psychology and Cognitive Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/1701", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Psychology", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/1702", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Cognitive Sciences", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Department of Psychology, University of York, YO10 5DD, York, UK", 
              "id": "http://www.grid.ac/institutes/grid.5685.e", 
              "name": [
                "Department of Psychology, University of York, YO10 5DD, York, UK"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Mileva", 
            "givenName": "Mila", 
            "id": "sg:person.013271347607.91", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013271347607.91"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Department of Psychology, University of York, YO10 5DD, York, UK", 
              "id": "http://www.grid.ac/institutes/grid.5685.e", 
              "name": [
                "Department of Psychology, University of York, YO10 5DD, York, UK"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Burton", 
            "givenName": "A. Mike", 
            "id": "sg:person.01261533174.87", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01261533174.87"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.3758/bf03193433", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1024937754", 
              "https://doi.org/10.3758/bf03193433"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.3758/brm.42.1.286", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1015940686", 
              "https://doi.org/10.3758/brm.42.1.286"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11292-007-9024-2", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1005645455", 
              "https://doi.org/10.1007/s11292-007-9024-2"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.3758/s13414-015-0903-8", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1035002253", 
              "https://doi.org/10.3758/s13414-015-0903-8"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.3758/bf03211397", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1036789239", 
              "https://doi.org/10.3758/bf03211397"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1186/s41235-018-0112-9", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1105142869", 
              "https://doi.org/10.1186/s41235-018-0112-9"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.3758/bf03211948", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1005083652", 
              "https://doi.org/10.3758/bf03211948"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.3758/bf03336915", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1012811312", 
              "https://doi.org/10.3758/bf03336915"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.3758/bf03194109", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1046523920", 
              "https://doi.org/10.3758/bf03194109"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.3758/bf03208228", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1036058366", 
              "https://doi.org/10.3758/bf03208228"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.3758/bf03198293", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1020426737", 
              "https://doi.org/10.3758/bf03198293"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.3758/bf03194105", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1028229859", 
              "https://doi.org/10.3758/bf03194105"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1186/s41235-016-0042-3", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1074249646", 
              "https://doi.org/10.1186/s41235-016-0042-3"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.3758/bf03203630", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1031867147", 
              "https://doi.org/10.3758/bf03203630"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s00221-005-0283-8", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1000878601", 
              "https://doi.org/10.1007/s00221-005-0283-8"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.3758/s13414-011-0153-3", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1027291403", 
              "https://doi.org/10.3758/s13414-011-0153-3"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2019-09-23", 
        "datePublishedReg": "2019-09-23", 
        "description": "BackgroundWe present a series of experiments on visual search in a highly complex environment, security closed-circuit television (CCTV). Using real surveillance footage from a large city transport hub, we ask viewers to search for target individuals. Search targets are presented in a number of ways, using naturally occurring images including their passports and photo ID, social media and custody images/videos. Our aim is to establish general principles for search efficiency within this realistic context.ResultsAcross four studies we find that providing multiple photos of the search target consistently improves performance. Three different photos of the target, taken at different times, give substantial performance improvements by comparison to a single target. By contrast, providing targets in moving videos or with biographical context does not lead to improvements in search accuracy.ConclusionsWe discuss the multiple-image advantage in relation to a growing understanding of the importance of within-person variability in face recognition.", 
        "genre": "article", 
        "id": "sg:pub.10.1186/s41235-019-0193-0", 
        "inLanguage": "en", 
        "isAccessibleForFree": true, 
        "isFundedItemOf": [
          {
            "id": "sg:grant.2752584", 
            "type": "MonetaryGrant"
          }, 
          {
            "id": "sg:grant.3957492", 
            "type": "MonetaryGrant"
          }
        ], 
        "isPartOf": [
          {
            "id": "sg:journal.1284451", 
            "issn": [
              "2365-7464"
            ], 
            "name": "Cognitive Research: Principles and Implications", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "1", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "4"
          }
        ], 
        "keywords": [
          "closed-circuit television", 
          "images/videos", 
          "search target", 
          "substantial performance improvement", 
          "surveillance footage", 
          "search accuracy", 
          "face search", 
          "search efficiency", 
          "face recognition", 
          "CCTV surveillance", 
          "multiple photos", 
          "complex environments", 
          "different photos", 
          "performance improvement", 
          "video", 
          "social media", 
          "realistic context", 
          "visual search", 
          "photo ID", 
          "series of experiments", 
          "target individuals", 
          "photos", 
          "search", 
          "number of ways", 
          "transport hub", 
          "single target", 
          "footage", 
          "images", 
          "ID", 
          "viewers", 
          "recognition", 
          "accuracy", 
          "context", 
          "environment", 
          "passport", 
          "performance", 
          "advantages", 
          "improvement", 
          "efficiency", 
          "surveillance", 
          "way", 
          "hub", 
          "general principles", 
          "television", 
          "experiments", 
          "principles", 
          "different times", 
          "number", 
          "time", 
          "target", 
          "comparison", 
          "importance", 
          "understanding", 
          "medium", 
          "series", 
          "relation", 
          "aim", 
          "individuals", 
          "variability", 
          "study", 
          "contrast", 
          "person variability", 
          "ConclusionsWe", 
          "biographical context", 
          "BackgroundWe", 
          "Four studies"
        ], 
        "name": "Face search in CCTV surveillance", 
        "pagination": "37", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1121192787"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1186/s41235-019-0193-0"
            ]
          }, 
          {
            "name": "pubmed_id", 
            "type": "PropertyValue", 
            "value": [
              "31549263"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1186/s41235-019-0193-0", 
          "https://app.dimensions.ai/details/publication/pub.1121192787"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-05-10T10:26", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20220509/entities/gbq_results/article/article_823.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1186/s41235-019-0193-0"
      }
    ]
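
    For readers working with this record locally, a minimal Python sketch along these lines can pull the main fields out of the JSON-LD shown above. It assumes the record has been saved to a local file, hypothetically named record.jsonld; the key names come directly from the record itself.

    import json

    # Load the JSON-LD record shown above, saved locally as "record.jsonld"
    # (a hypothetical filename used here for illustration).
    with open("record.jsonld") as f:
        records = json.load(f)

    pub = records[0]  # the payload is a list holding a single publication object

    print(pub["name"])           # "Face search in CCTV surveillance"
    print(pub["datePublished"])  # "2019-09-23"

    # Authors are Person objects carrying givenName / familyName fields.
    for author in pub["author"]:
        print(author["givenName"], author["familyName"])

    # Identifiers (DOI, Dimensions ID, PubMed ID) live under "productId".
    for pid in pub["productId"]:
        print(pid["name"], pid["value"][0])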
     


    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1186/s41235-019-0193-0'
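
    The same request can be issued from Python; a rough equivalent of the curl call above, using the third-party requests library (an assumption here, not something SciGraph mandates), would be:

    import requests

    URL = "https://scigraph.springernature.com/pub.10.1186/s41235-019-0193-0"

    # Request the JSON-LD serialisation via content negotiation,
    # mirroring the curl command above.
    resp = requests.get(URL, headers={"Accept": "application/ld+json"})
    resp.raise_for_status()
    record = resp.json()  # the JSON-LD payload, parsed into Python objects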

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1186/s41235-019-0193-0'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1186/s41235-019-0193-0'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1186/s41235-019-0193-0'
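
    Any of the RDF serialisations above can be loaded into a graph for further processing. A minimal sketch using the third-party rdflib package (an assumption; any RDF library with a Turtle parser would do) might look like this:

    import requests
    from rdflib import Graph

    URL = "https://scigraph.springernature.com/pub.10.1186/s41235-019-0193-0"

    # Fetch the Turtle serialisation, as in the curl example above.
    turtle = requests.get(URL, headers={"Accept": "text/turtle"}).text

    # Parse it into an in-memory graph of RDF triples.
    g = Graph()
    g.parse(data=turtle, format="turtle")
    print(len(g), "triples loaded")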


     

    This table displays all metadata directly associated with this object as RDF triples.

    206 TRIPLES      22 PREDICATES      109 URIs      84 LITERALS      7 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1186/s41235-019-0193-0 schema:about anzsrc-for:17
    2 anzsrc-for:1701
    3 anzsrc-for:1702
    4 schema:author Nba43ad7bd5a14eef9e88bf5c4bbdd36a
    5 schema:citation sg:pub.10.1007/s00221-005-0283-8
    6 sg:pub.10.1007/s11292-007-9024-2
    7 sg:pub.10.1186/s41235-016-0042-3
    8 sg:pub.10.1186/s41235-018-0112-9
    9 sg:pub.10.3758/bf03193433
    10 sg:pub.10.3758/bf03194105
    11 sg:pub.10.3758/bf03194109
    12 sg:pub.10.3758/bf03198293
    13 sg:pub.10.3758/bf03203630
    14 sg:pub.10.3758/bf03208228
    15 sg:pub.10.3758/bf03211397
    16 sg:pub.10.3758/bf03211948
    17 sg:pub.10.3758/bf03336915
    18 sg:pub.10.3758/brm.42.1.286
    19 sg:pub.10.3758/s13414-011-0153-3
    20 sg:pub.10.3758/s13414-015-0903-8
    21 schema:datePublished 2019-09-23
    22 schema:datePublishedReg 2019-09-23
    23 schema:description BackgroundWe present a series of experiments on visual search in a highly complex environment, security closed-circuit television (CCTV). Using real surveillance footage from a large city transport hub, we ask viewers to search for target individuals. Search targets are presented in a number of ways, using naturally occurring images including their passports and photo ID, social media and custody images/videos. Our aim is to establish general principles for search efficiency within this realistic context.ResultsAcross four studies we find that providing multiple photos of the search target consistently improves performance. Three different photos of the target, taken at different times, give substantial performance improvements by comparison to a single target. By contrast, providing targets in moving videos or with biographical context does not lead to improvements in search accuracy.ConclusionsWe discuss the multiple-image advantage in relation to a growing understanding of the importance of within-person variability in face recognition.
    24 schema:genre article
    25 schema:inLanguage en
    26 schema:isAccessibleForFree true
    27 schema:isPartOf N4a57066c35ec47a79ed6c222e01489bc
    28 Nf1845b428fc6484393104375246d2d77
    29 sg:journal.1284451
    30 schema:keywords BackgroundWe
    31 CCTV surveillance
    32 ConclusionsWe
    33 Four studies
    34 ID
    35 accuracy
    36 advantages
    37 aim
    38 biographical context
    39 closed-circuit television
    40 comparison
    41 complex environments
    42 context
    43 contrast
    44 different photos
    45 different times
    46 efficiency
    47 environment
    48 experiments
    49 face recognition
    50 face search
    51 footage
    52 general principles
    53 hub
    54 images
    55 images/videos
    56 importance
    57 improvement
    58 individuals
    59 medium
    60 multiple photos
    61 number
    62 number of ways
    63 passport
    64 performance
    65 performance improvement
    66 person variability
    67 photo ID
    68 photos
    69 principles
    70 realistic context
    71 recognition
    72 relation
    73 search
    74 search accuracy
    75 search efficiency
    76 search target
    77 series
    78 series of experiments
    79 single target
    80 social media
    81 study
    82 substantial performance improvement
    83 surveillance
    84 surveillance footage
    85 target
    86 target individuals
    87 television
    88 time
    89 transport hub
    90 understanding
    91 variability
    92 video
    93 viewers
    94 visual search
    95 way
    96 schema:name Face search in CCTV surveillance
    97 schema:pagination 37
    98 schema:productId N3d744c53d6184d4a957980773ec1b7fe
    99 N6a5c7377f3744a08b62ff76ed13697db
    100 Nf89af3b0dbd0452693e588df0b268dd4
    101 schema:sameAs https://app.dimensions.ai/details/publication/pub.1121192787
    102 https://doi.org/10.1186/s41235-019-0193-0
    103 schema:sdDatePublished 2022-05-10T10:26
    104 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    105 schema:sdPublisher N42b077f131d248a590822fe888ea7b58
    106 schema:url https://doi.org/10.1186/s41235-019-0193-0
    107 sgo:license sg:explorer/license/
    108 sgo:sdDataset articles
    109 rdf:type schema:ScholarlyArticle
    110 N3d744c53d6184d4a957980773ec1b7fe schema:name doi
    111 schema:value 10.1186/s41235-019-0193-0
    112 rdf:type schema:PropertyValue
    113 N42b077f131d248a590822fe888ea7b58 schema:name Springer Nature - SN SciGraph project
    114 rdf:type schema:Organization
    115 N4a57066c35ec47a79ed6c222e01489bc schema:issueNumber 1
    116 rdf:type schema:PublicationIssue
    117 N6a5c7377f3744a08b62ff76ed13697db schema:name pubmed_id
    118 schema:value 31549263
    119 rdf:type schema:PropertyValue
    120 N6a796b6741ad4c57a28c4b245d598409 rdf:first sg:person.01261533174.87
    121 rdf:rest rdf:nil
    122 Nba43ad7bd5a14eef9e88bf5c4bbdd36a rdf:first sg:person.013271347607.91
    123 rdf:rest N6a796b6741ad4c57a28c4b245d598409
    124 Nf1845b428fc6484393104375246d2d77 schema:volumeNumber 4
    125 rdf:type schema:PublicationVolume
    126 Nf89af3b0dbd0452693e588df0b268dd4 schema:name dimensions_id
    127 schema:value pub.1121192787
    128 rdf:type schema:PropertyValue
    129 anzsrc-for:17 schema:inDefinedTermSet anzsrc-for:
    130 schema:name Psychology and Cognitive Sciences
    131 rdf:type schema:DefinedTerm
    132 anzsrc-for:1701 schema:inDefinedTermSet anzsrc-for:
    133 schema:name Psychology
    134 rdf:type schema:DefinedTerm
    135 anzsrc-for:1702 schema:inDefinedTermSet anzsrc-for:
    136 schema:name Cognitive Sciences
    137 rdf:type schema:DefinedTerm
    138 sg:grant.2752584 http://pending.schema.org/fundedItem sg:pub.10.1186/s41235-019-0193-0
    139 rdf:type schema:MonetaryGrant
    140 sg:grant.3957492 http://pending.schema.org/fundedItem sg:pub.10.1186/s41235-019-0193-0
    141 rdf:type schema:MonetaryGrant
    142 sg:journal.1284451 schema:issn 2365-7464
    143 schema:name Cognitive Research: Principles and Implications
    144 schema:publisher Springer Nature
    145 rdf:type schema:Periodical
    146 sg:person.01261533174.87 schema:affiliation grid-institutes:grid.5685.e
    147 schema:familyName Burton
    148 schema:givenName A. Mike
    149 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01261533174.87
    150 rdf:type schema:Person
    151 sg:person.013271347607.91 schema:affiliation grid-institutes:grid.5685.e
    152 schema:familyName Mileva
    153 schema:givenName Mila
    154 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013271347607.91
    155 rdf:type schema:Person
    156 sg:pub.10.1007/s00221-005-0283-8 schema:sameAs https://app.dimensions.ai/details/publication/pub.1000878601
    157 https://doi.org/10.1007/s00221-005-0283-8
    158 rdf:type schema:CreativeWork
    159 sg:pub.10.1007/s11292-007-9024-2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1005645455
    160 https://doi.org/10.1007/s11292-007-9024-2
    161 rdf:type schema:CreativeWork
    162 sg:pub.10.1186/s41235-016-0042-3 schema:sameAs https://app.dimensions.ai/details/publication/pub.1074249646
    163 https://doi.org/10.1186/s41235-016-0042-3
    164 rdf:type schema:CreativeWork
    165 sg:pub.10.1186/s41235-018-0112-9 schema:sameAs https://app.dimensions.ai/details/publication/pub.1105142869
    166 https://doi.org/10.1186/s41235-018-0112-9
    167 rdf:type schema:CreativeWork
    168 sg:pub.10.3758/bf03193433 schema:sameAs https://app.dimensions.ai/details/publication/pub.1024937754
    169 https://doi.org/10.3758/bf03193433
    170 rdf:type schema:CreativeWork
    171 sg:pub.10.3758/bf03194105 schema:sameAs https://app.dimensions.ai/details/publication/pub.1028229859
    172 https://doi.org/10.3758/bf03194105
    173 rdf:type schema:CreativeWork
    174 sg:pub.10.3758/bf03194109 schema:sameAs https://app.dimensions.ai/details/publication/pub.1046523920
    175 https://doi.org/10.3758/bf03194109
    176 rdf:type schema:CreativeWork
    177 sg:pub.10.3758/bf03198293 schema:sameAs https://app.dimensions.ai/details/publication/pub.1020426737
    178 https://doi.org/10.3758/bf03198293
    179 rdf:type schema:CreativeWork
    180 sg:pub.10.3758/bf03203630 schema:sameAs https://app.dimensions.ai/details/publication/pub.1031867147
    181 https://doi.org/10.3758/bf03203630
    182 rdf:type schema:CreativeWork
    183 sg:pub.10.3758/bf03208228 schema:sameAs https://app.dimensions.ai/details/publication/pub.1036058366
    184 https://doi.org/10.3758/bf03208228
    185 rdf:type schema:CreativeWork
    186 sg:pub.10.3758/bf03211397 schema:sameAs https://app.dimensions.ai/details/publication/pub.1036789239
    187 https://doi.org/10.3758/bf03211397
    188 rdf:type schema:CreativeWork
    189 sg:pub.10.3758/bf03211948 schema:sameAs https://app.dimensions.ai/details/publication/pub.1005083652
    190 https://doi.org/10.3758/bf03211948
    191 rdf:type schema:CreativeWork
    192 sg:pub.10.3758/bf03336915 schema:sameAs https://app.dimensions.ai/details/publication/pub.1012811312
    193 https://doi.org/10.3758/bf03336915
    194 rdf:type schema:CreativeWork
    195 sg:pub.10.3758/brm.42.1.286 schema:sameAs https://app.dimensions.ai/details/publication/pub.1015940686
    196 https://doi.org/10.3758/brm.42.1.286
    197 rdf:type schema:CreativeWork
    198 sg:pub.10.3758/s13414-011-0153-3 schema:sameAs https://app.dimensions.ai/details/publication/pub.1027291403
    199 https://doi.org/10.3758/s13414-011-0153-3
    200 rdf:type schema:CreativeWork
    201 sg:pub.10.3758/s13414-015-0903-8 schema:sameAs https://app.dimensions.ai/details/publication/pub.1035002253
    202 https://doi.org/10.3758/s13414-015-0903-8
    203 rdf:type schema:CreativeWork
    204 grid-institutes:grid.5685.e schema:alternateName Department of Psychology, University of York, YO10 5DD, York, UK
    205 schema:name Department of Psychology, University of York, YO10 5DD, York, UK
    206 rdf:type schema:Organization
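
    Once the triples are in a graph (see the rdflib sketch above), they can be queried with SPARQL. The following sketch reuses the graph g from that example and assumes the schema: prefix resolves to http://schema.org/, as is usual for schema.org data; it lists the citation targets recorded for this publication.

    # Continuing from the rdflib sketch above: list the works this article cites.
    CITATIONS = """
    PREFIX schema: <http://schema.org/>
    SELECT ?cited WHERE {
        <http://scigraph.springernature.com/pub.10.1186/s41235-019-0193-0>
            schema:citation ?cited .
    }
    """
    for row in g.query(CITATIONS):
        print(row.cited)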
     



