YEARS

2009-2012

AUTHORS

Michelle Greene

TITLE

Effect of Scene Contextual Relations for Guiding Real-World Visual Search

ABSTRACT

DESCRIPTION (provided by applicant): Visual search is a daily task for all of us, from finding our car keys to looking for a colleague in a crowd. Given the importance of this task, much research has been devoted to it, and thus we know a great deal about visual search in artificial two-dimensional displays. However, visual search in the real world occurs in complex, yet highly structured, three-dimensional environments. What are the principles that guide search in real-world scenes? A separate line of research has highlighted the role of contextual regularities between objects and scenes. In other words, knowing that a keyboard is found in offices helps the recognition of both keyboards and offices. Do such regularities help guide attention in real-world visual search problems? While the importance of these statistical regularities has been widely acknowledged, they have not been measured or quantified. Measuring these regularities is necessary to understand the role they play in search. Here, we have amassed a large database of 3,500 scenes and have completely measured all objects and regions in these scenes. This rich dataset includes information on which objects occur in different scene categories and the spatial distributions of the objects' positions. We propose to analyze this dataset to extract the statistical regularities existing between objects and their scene context, as well as regularities in the co-occurrence structure between objects. We will use the formal framework of information theory to quantify the degree of regularity in these relationships, which allows us to put an upper bound on the amount of guidance we can expect from these statistics. We will then perform behavioral experiments examining the use of these statistics in real-world visual search problems.
These data allow us to ask questions and make predictions that were previously impossible, thereby allowing real-world search to be studied in natural scenes in a controlled and principled way. Health relevance: Understanding how attention is deployed in real-world visual search tasks has many public health implications. Understanding difficult visual search problems could lead to better accuracy in interpreting X-ray and MRI data, as well as in searching for abnormalities during endoscopic surgery. Furthermore, understanding search can help those whose search abilities are compromised for visual (e.g., macular degeneration) or attentional (e.g., ADHD or age-related cognitive decline) reasons.
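The information-theoretic bound described in the abstract can be illustrated with a toy calculation: the mutual information I(O; S) between an object's presence (O) and the scene category (S) measures, in bits, how much knowing the scene reduces uncertainty about the object. This is a minimal sketch; the observation counts below are hypothetical and are not drawn from the grant's 3,500-scene database.

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from a list of (x, y) observations."""
    n = len(pairs)
    pxy = Counter(pairs)                 # joint counts
    px = Counter(x for x, _ in pairs)    # marginal counts of x
    py = Counter(y for _, y in pairs)    # marginal counts of y
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# Hypothetical counts: does a keyboard appear, given the scene category?
obs = (
    [("keyboard", "office")] * 9 + [("no_keyboard", "office")] * 1
    + [("keyboard", "kitchen")] * 1 + [("no_keyboard", "kitchen")] * 9
)
mi = mutual_information(obs)  # ~0.53 bits: scene strongly predicts the object
```

An estimate near zero would mean the scene category offers no guidance about the object; the upper bound proposed in the abstract corresponds to computing such quantities over the full database.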

FUNDED PUBLICATIONS

  • Basic level category structure emerges gradually across human ventral visual cortex.
  • Visual search in scenes involves selective and nonselective pathways.
  • Reconsidering Yarbus: a failure to predict observers' task from eye movement patterns.
  • Global image properties do not guide visual search.



    23 TRIPLES      17 PREDICATES      24 URIs      9 LITERALS

    Subject: grants:edef4c0bade6c05027b55e9d76be6f65

    #   Predicate                    Object
    1   sg:abstract                  [full abstract text, as above]
    2   sg:endYear                   2012
    3   sg:fundingAmount             144150.0
    4   sg:fundingCurrency           USD
    5   sg:hasContribution           contributions:29acbc068759f07f26225a178d2ee136
    6   sg:hasFieldOfResearchCode    anzsrc-for:08
    7   sg:hasFieldOfResearchCode    anzsrc-for:0801
    8   sg:hasFieldOfResearchCode    anzsrc-for:17
    9   sg:hasFieldOfResearchCode    anzsrc-for:1701
    10  sg:hasFundedPublication      articles:346d1eb1a7a6e9809cb7360b94e8e5b6
    11  sg:hasFundedPublication      articles:a7072d5c3ffb5ceb781fe0dc87023745
    12  sg:hasFundedPublication      articles:bf258d9fe2b8e4c169f010aa93667d4b
    13  sg:hasFundedPublication      articles:e26cfdcda652226571d48dd8531067a9
    14  sg:hasFundingOrganization    grid-institutes:grid.280030.9
    15  sg:hasRecipientOrganization  grid-institutes:grid.168010.e
    16  sg:language                  English
    17  sg:license                   http://scigraph.springernature.com/explorer/license/
    18  sg:scigraphId                edef4c0bade6c05027b55e9d76be6f65
    19  sg:startYear                 2009
    20  sg:title                     Effect of Scene Contextual Relations for Guiding Real-World Visual Search
    21  sg:webpage                   http://projectreporter.nih.gov/project_info_description.cfm?aid=8142799
    22  rdf:type                     sg:Grant
    23  rdfs:label                   Grant: Effect of Scene Contextual Relations for Guiding Real-World Visual Search
    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular JSON format for linked data.

    curl -H 'Accept: application/ld+json' 'http://scigraph.springernature.com/things/grants/edef4c0bade6c05027b55e9d76be6f65'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'http://scigraph.springernature.com/things/grants/edef4c0bade6c05027b55e9d76be6f65'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'http://scigraph.springernature.com/things/grants/edef4c0bade6c05027b55e9d76be6f65'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'http://scigraph.springernature.com/things/grants/edef4c0bade6c05027b55e9d76be6f65'
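    The four curl calls above differ only in the Accept header sent to the same URI (HTTP content negotiation). A small helper can generate the equivalent command for any of the listed serializations; this is a sketch, with the media types and grant URI taken from the record above.

    ```python
    # Accept headers for the RDF serializations listed above.
    FORMATS = {
        "json-ld": "application/ld+json",
        "n-triples": "application/n-triples",
        "turtle": "text/turtle",
        "rdf-xml": "application/rdf+xml",
    }

    GRANT_URL = (
        "http://scigraph.springernature.com/things/grants/"
        "edef4c0bade6c05027b55e9d76be6f65"
    )

    def curl_command(fmt):
        """Build the curl invocation that negotiates the requested RDF format."""
        return f"curl -H 'Accept: {FORMATS[fmt]}' '{GRANT_URL}'"
    ```

    For example, `curl_command("turtle")` reproduces the Turtle request shown above; passing the same Accept headers from any HTTP client works identically.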







