YEARS

2011-2015

AUTHORS

Marc Pomplun

TITLE

Semantic Guidance of Visual Attention

ABSTRACT

DESCRIPTION (provided by applicant): Our visual world consists not only of low-level features such as color or contrast; it also contains high-level features such as the meaning of objects and the semantic relations among them. While low-level features in real-world scenes have been shown to guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. The proposed project will study guidance of eye movements by semantic similarity between objects during real-world scene inspection and search, using a novel methodology developed in preliminary studies. By selecting scenes from the LabelMe object-annotated image database and applying Latent Semantic Analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects' gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one (transitional semantic guidance). Furthermore, during the course of a scene search, subjects' eye movements were progressively guided toward objects that were semantically similar to the search target (target-induced semantic guidance). These preliminary findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. The proposed project will build on these results to establish a new field of research and a broader model of attentional control in real-world scenes. First, it will investigate two potential factors that control semantic guidance: the observer's individual semantic space (Study 1) and the semantic consistency of the visual scene (Study 2). Second, it will examine the ecological function of two aspects of semantic guidance: transitional semantic guidance during scene inspection (Study 3) and the gradual increase in target-induced semantic guidance over the course of a search (Study 4). The results of Studies 1 to 4 will direct future behavioral and neurophysiological investigations of semantic guidance. Moreover, the basic understanding of semantic guidance developed in these studies will be used in Study 5 to devise a computational model and combine it with traditional models of attentional control that are limited to the influence of low-level visual features. The resulting two-level attentional control model will advance the field by helping researchers form a more comprehensive view of visual attention in real-world scenes.
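The core of the methodology described above can be sketched in a few lines. The example below is an illustrative assumption, not the project's actual code: it assumes LSA vectors for each object label are already available (here, tiny made-up vectors; real ones come from an LSA model trained on a large corpus) and scores each labeled scene object by its cosine similarity to a target label, which is the basic ingredient of a semantic saliency map.

```python
import math

# Hypothetical LSA vectors for object labels. These toy values are made up
# for illustration; real LSA vectors are derived from a corpus via SVD.
lsa_vectors = {
    "stove":    [0.9, 0.1, 0.2],
    "kettle":   [0.8, 0.2, 0.3],
    "sofa":     [0.1, 0.9, 0.1],
    "painting": [0.2, 0.3, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def semantic_saliency(target, labels):
    """Score each labeled scene object by its LSA similarity to the target."""
    t = lsa_vectors[target]
    return {label: cosine(t, lsa_vectors[label]) for label in labels}

saliency = semantic_saliency("stove", ["kettle", "sofa", "painting"])
# Objects semantically closer to the target score higher, so "kettle"
# outranks "sofa" and "painting" with these toy vectors.
```

In the actual studies, such per-object scores would be projected onto the annotated object regions of a scene to form a saliency map, which can then be evaluated as a predictor of gaze transitions (e.g., via ROC analysis).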

FUNDED PUBLICATIONS

  • Predicting raters' transparency judgments of English and Chinese morphological constituents using latent semantic analysis.
  • The roles of scene gist and spatial dependency among objects in the semantic guidance of attention in real-world scenes.
  • The attraction of visual attention to texts in real-world scenes.
  • Using singular value decomposition to investigate degraded Chinese character recognition: evidence from eye movements during reading.

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.


    22 TRIPLES      17 PREDICATES      23 URIs      9 LITERALS

    Subject Predicate Object
    1 grants:a5d1a8a910787e957e4d02b39d5ba2ba sg:abstract (full abstract text; reproduced in the ABSTRACT section above)
    2 sg:endYear 2015
    3 sg:fundingAmount 652974.0
    4 sg:fundingCurrency USD
    5 sg:hasContribution contributions:6585ef0c9d1a97226fa3c4d98eeb44ef
    6 sg:hasFieldOfResearchCode anzsrc-for:17
    7 anzsrc-for:1701
    8 sg:hasFundedPublication articles:70989d50cbd8746edd55411c0709861a
    9 articles:8d4fd25d694e5ee931db099292163718
    10 articles:d807efe1bab728ae6953fe49e156cc03
    11 articles:f1721a72db903a79a0b6235fefa2b749
    12 articles:fb1f51f3e760a402d926ed8ce01aee0b
    13 sg:hasFundingOrganization grid-institutes:grid.280030.9
    14 sg:hasRecipientOrganization grid-institutes:grid.266685.9
    15 sg:language English
    16 sg:license http://scigraph.springernature.com/explorer/license/
    17 sg:scigraphId a5d1a8a910787e957e4d02b39d5ba2ba
    18 sg:startYear 2011
    19 sg:title Semantic Guidance of Visual Attention
    20 sg:webpage http://projectreporter.nih.gov/project_info_description.cfm?aid=8581346
    21 rdf:type sg:Grant
    22 rdfs:label Grant: Semantic Guidance of Visual Attention
    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular JSON format for linked data.

    curl -H 'Accept: application/ld+json' 'http://scigraph.springernature.com/things/grants/a5d1a8a910787e957e4d02b39d5ba2ba'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'http://scigraph.springernature.com/things/grants/a5d1a8a910787e957e4d02b39d5ba2ba'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'http://scigraph.springernature.com/things/grants/a5d1a8a910787e957e4d02b39d5ba2ba'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'http://scigraph.springernature.com/things/grants/a5d1a8a910787e957e4d02b39d5ba2ba'
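    Once fetched, the JSON-LD response can be processed with any JSON tooling. The sketch below is a minimal, assumption-laden example: the `sample` fragment is hand-made to mirror the predicate names in the triples table above, not a recorded response from the service, so the real payload's structure may differ. Request the actual data with the curl commands shown above.

```python
import json

# Hand-made fragment in the shape a JSON-LD response for this grant might
# take (an assumption for illustration; keys mirror the predicates in the
# triples table above, not a captured server response).
sample = json.dumps({
    "@id": "http://scigraph.springernature.com/things/grants/a5d1a8a910787e957e4d02b39d5ba2ba",
    "@type": "sg:Grant",
    "sg:title": "Semantic Guidance of Visual Attention",
    "sg:startYear": 2011,
    "sg:endYear": 2015,
    "sg:fundingAmount": 652974.0,
    "sg:fundingCurrency": "USD",
})

record = json.loads(sample)
# Pull out a few fields using the predicate names as keys.
title = record["sg:title"]
duration = record["sg:endYear"] - record["sg:startYear"]
print(f"{title}: {duration} years, "
      f"{record['sg:fundingAmount']:.0f} {record['sg:fundingCurrency']}")
```

    For bulk work, the N-Triples form is often easier to stream line by line, since each line is a self-contained subject–predicate–object statement.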





