Fast and scalable structure-from-motion based localization for high-precision mobile augmented reality systems


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2016-07-19

AUTHORS

Hyojoon Bae, Michael Walker, Jules White, Yao Pan, Yu Sun, Mani Golparvar-Fard

ABSTRACT

A key problem in mobile computing is providing people access to cyber-information associated with their surrounding physical objects. Mobile augmented reality is one of the emerging techniques that addresses this problem by allowing users to see the cyber-information associated with real-world physical objects by overlaying that cyber-information on the physical objects’ imagery. This paper presents a new vision-based context-aware approach for mobile augmented reality that allows users to query and access semantically-rich 3D cyber-information related to real-world physical objects and see it precisely overlaid on top of imagery of the associated physical objects. The approach does not require any RF-based location tracking modules, external hardware attachments on the mobile devices, and/or optical/fiducial markers for localizing a user’s position. Rather, the user’s 3D location and orientation are automatically and purely derived by comparing images from the user’s mobile device to a 3D point cloud model generated from a set of pre-collected photographs. Our approach supports content authoring where collaboration on editing the content stored in the 3D cloud is possible and content added by one user can be immediately accessible by others. In addition, a key challenge of scalability for mobile augmented reality is addressed in this paper. In general, mobile augmented reality is required to work regardless of users’ location and environment, in terms of physical scale, such as size of objects, and in terms of cyber-information scale, such as total number of cyber-information entities associated with physical objects. However, many existing approaches for mobile augmented reality have mainly tested their approaches on limited real-world use-cases and have challenges in scaling their approaches. 
By designing multi-model-based direct 2D-to-3D matching algorithms for localization, as well as applying a caching scheme, the proposed research consistently supports near real-time localization and information association regardless of users’ location, size of physical objects, and number of cyber-physical information items. Empirical results presented in the paper show that the approach can provide millimeter-level augmented reality across several hundred or thousand objects without the need for additional non-imagery sensor inputs.
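The core of the direct 2D-to-3D matching step described above is pairing feature descriptors from the user's camera image with descriptors attached to 3D points in the pre-built point cloud. The sketch below is a minimal illustrative version of that matching stage only, not the authors' implementation: descriptors are plain numeric tuples, the brute-force search and Lowe-style ratio test are standard technique choices, and the function name and data layout are hypothetical. The paper's full pipeline additionally involves multiple models, caching, and pose estimation from the resulting 2D-3D correspondences, none of which are shown here.

```python
import math

def match_2d_to_3d(query_descs, model_points, ratio=0.8):
    """Direct 2D-to-3D matching sketch.

    query_descs: list of descriptor tuples extracted from the camera image.
    model_points: dict mapping a 3D point id to its descriptor tuple.
    A 2D descriptor is matched to the nearest 3D point descriptor, and the
    match is kept only if it passes a ratio test: the best distance must be
    clearly smaller than the second-best, which rejects ambiguous matches.
    Returns a list of (query_index, point_id) correspondences.
    """
    matches = []
    for qi, q in enumerate(query_descs):
        best, second, best_id = float("inf"), float("inf"), None
        for pid, desc in model_points.items():
            dist = math.dist(q, desc)          # Euclidean distance
            if dist < best:
                second, best, best_id = best, dist, pid
            elif dist < second:
                second = dist
        if best_id is not None and best < ratio * second:
            matches.append((qi, best_id))
    return matches

# A descriptor close to one model point matches; one equidistant
# from two model points is rejected by the ratio test.
cloud = {"p1": (0.0, 0.0), "p2": (10.0, 10.0)}
print(match_2d_to_3d([(0.1, 0.0)], cloud))   # [(0, 'p1')]
print(match_2d_to_3d([(5.0, 5.0)], cloud))   # []
```

In practice the brute-force inner loop would be replaced by an approximate nearest-neighbor index, and the accepted correspondences would feed a RANSAC-based pose solver.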

PAGES

4

References to SciGraph publications

  • 2012. Towards Fast Image-Based Localization on a City-Scale in OUTDOOR AND LARGE-SCALE REAL-WORLD SCENE ANALYSIS
  • 2000. Bundle Adjustment — A Modern Synthesis in VISION ALGORITHMS: THEORY AND PRACTICE
  • 2013-06-12. SMART: scalable and modular augmented reality template for rapid development of engineering visualization applications in VISUALIZATION IN ENGINEERING
  • 2013-06-12. High-precision vision-based mobile augmented reality system for context-aware architectural, engineering, construction and facility management (AEC/FM) applications in VISUALIZATION IN ENGINEERING
  • 2010. Building Rome on a Cloudless Day in COMPUTER VISION – ECCV 2010
  • 2006. What and Where: 3D Object Recognition with Accurate Pose in TOWARD CATEGORY-LEVEL OBJECT RECOGNITION
  • 2010. Addressing Challenges with Augmented Reality Applications on Smartphones in MOBILE WIRELESS MIDDLEWARE, OPERATING SYSTEMS, AND APPLICATIONS
  • 2007-12-11. Modeling the World from Internet Photo Collections in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2013. Improving Image-Based Localization through Increasing Correct Feature Correspondences in ADVANCES IN VISUAL COMPUTING
IDENTIFIERS

    URI

    http://scigraph.springernature.com/pub.10.1186/s13678-016-0005-0

    DOI

    http://dx.doi.org/10.1186/s13678-016-0005-0

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1004728865


    Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
    Incoming Citations: browse incoming citations for this publication using opencitations.net.

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0806", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information Systems", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, USA", 
              "id": "http://www.grid.ac/institutes/grid.438526.e", 
              "name": [
                "Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, USA"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Bae", 
            "givenName": "Hyojoon", 
            "id": "sg:person.012462130215.47", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012462130215.47"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, USA", 
              "id": "http://www.grid.ac/institutes/grid.152326.1", 
              "name": [
                "Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, USA"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Walker", 
            "givenName": "Michael", 
            "id": "sg:person.013257510615.69", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013257510615.69"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, USA", 
              "id": "http://www.grid.ac/institutes/grid.152326.1", 
              "name": [
                "Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, USA"
              ], 
              "type": "Organization"
            }, 
            "familyName": "White", 
            "givenName": "Jules", 
            "id": "sg:person.011626113163.36", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011626113163.36"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, USA", 
              "id": "http://www.grid.ac/institutes/grid.152326.1", 
              "name": [
                "Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, USA"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Pan", 
            "givenName": "Yao", 
            "id": "sg:person.016255777447.97", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016255777447.97"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, USA", 
              "id": "http://www.grid.ac/institutes/grid.152326.1", 
              "name": [
                "Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, USA"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Sun", 
            "givenName": "Yu", 
            "id": "sg:person.011145667247.60", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011145667247.60"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Department of Civil and Environmental Engineering and the Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, USA", 
              "id": "http://www.grid.ac/institutes/grid.35403.31", 
              "name": [
                "Department of Civil and Environmental Engineering and the Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, USA"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Golparvar-Fard", 
            "givenName": "Mani", 
            "id": "sg:person.011217127065.55", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011217127065.55"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/3-540-44480-7_21", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1021371683", 
              "https://doi.org/10.1007/3-540-44480-7_21"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1186/2213-7459-1-3", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1022951741", 
              "https://doi.org/10.1186/2213-7459-1-3"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-642-17758-3_10", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1016089420", 
              "https://doi.org/10.1007/978-3-642-17758-3_10"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/11957959_4", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1052186402", 
              "https://doi.org/10.1007/11957959_4"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-642-15561-1_27", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1020939024", 
              "https://doi.org/10.1007/978-3-642-15561-1_27"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1186/2213-7459-1-1", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1006017093", 
              "https://doi.org/10.1186/2213-7459-1-1"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-642-34091-8_9", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1018624838", 
              "https://doi.org/10.1007/978-3-642-34091-8_9"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-007-0107-3", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1021569711", 
              "https://doi.org/10.1007/s11263-007-0107-3"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-642-41914-0_31", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1016830637", 
              "https://doi.org/10.1007/978-3-642-41914-0_31"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2016-07-19", 
        "datePublishedReg": "2016-07-19", 
        "description": "A key problem in mobile computing is providing people access to cyber-information associated with their surrounding physical objects. Mobile augmented reality is one of the emerging techniques that addresses this problem by allowing users to see the cyber-information associated with real-world physical objects by overlaying that cyber-information on the physical objects\u2019 imagery. This paper presents a new vision-based context-aware approach for mobile augmented reality that allows users to query and access semantically-rich 3D cyber-information related to real-world physical objects and see it precisely overlaid on top of imagery of the associated physical objects. The approach does not require any RF-based location tracking modules, external hardware attachments on the mobile devices, and/or optical/fiducial markers for localizing a user\u2019s position. Rather, the user\u2019s 3D location and orientation are automatically and purely derived by comparing images from the user\u2019s mobile device to a 3D point cloud model generated from a set of pre-collected photographs. Our approach supports content authoring where collaboration on editing the content stored in the 3D cloud is possible and content added by one user can be immediately accessible by others. In addition, a key challenge of scalability for mobile augmented reality is addressed in this paper. In general, mobile augmented reality is required to work regardless of users\u2019 location and environment, in terms of physical scale, such as size of objects, and in terms of cyber-information scale, such as total number of cyber-information entities associated with physical objects. However, many existing approaches for mobile augmented reality have mainly tested their approaches on limited real-world use-cases and have challenges in scaling their approaches. 
By designing a multi-model based direct 2D-to-3D matching algorithms for localization, as well as applying a caching scheme, the proposed research consistently supports near real-time localization and information association regardless of users\u2019 location, size of physical objects, and number of cyber-physical information items. Empirical results presented in the paper show that the approach can provide millimeter-level augmented reality across several hundred or thousand objects without the need for additional non-imagery sensor inputs.", 
        "genre": "article", 
        "id": "sg:pub.10.1186/s13678-016-0005-0", 
        "inLanguage": "en", 
        "isAccessibleForFree": true, 
        "isPartOf": [
          {
            "id": "sg:journal.1135900", 
            "issn": [
              "2196-873X"
            ], 
            "name": "mUX: The Journal of Mobile User Experience", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "1", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "5"
          }
        ], 
        "keywords": [
          "cyber-information", 
          "real-world physical objects", 
          "mobile devices", 
          "physical objects", 
          "user location", 
          "context-aware approach", 
          "user's mobile device", 
          "real-time localization", 
          "point cloud model", 
          "mobile computing", 
          "reality system", 
          "tracking module", 
          "user position", 
          "sensor inputs", 
          "size of objects", 
          "information items", 
          "scalable structure", 
          "mobile", 
          "hardware attachment", 
          "key problem", 
          "users", 
          "cloud model", 
          "key challenges", 
          "objects", 
          "fiducial markers", 
          "Information Association", 
          "computing", 
          "paper show", 
          "queries", 
          "scalability", 
          "empirical results", 
          "reality", 
          "algorithm", 
          "imagery", 
          "cloud", 
          "challenges", 
          "devices", 
          "images", 
          "people's access", 
          "module", 
          "scheme", 
          "set", 
          "location", 
          "environment", 
          "access", 
          "collaboration", 
          "input", 
          "system", 
          "entities", 
          "localization", 
          "technique", 
          "terms", 
          "number", 
          "top", 
          "need", 
          "model", 
          "position", 
          "motion", 
          "items", 
          "physical scales", 
          "research", 
          "show", 
          "size", 
          "content", 
          "results", 
          "total number", 
          "photographs", 
          "scale", 
          "structure", 
          "orientation", 
          "addition", 
          "association", 
          "attachment", 
          "markers", 
          "paper", 
          "approach", 
          "problem", 
          "new vision-based context-aware approach", 
          "vision-based context-aware approach", 
          "rich 3D cyber-information", 
          "top of imagery", 
          "RF-based location tracking modules", 
          "location tracking modules", 
          "external hardware attachments", 
          "user\u2019s 3D location", 
          "pre-collected photographs", 
          "cyber-information scale", 
          "cyber-information entities", 
          "direct 2D-to-3D", 
          "cyber-physical information items", 
          "additional non-imagery sensor inputs", 
          "non-imagery sensor inputs", 
          "high-precision mobile"
        ], 
        "name": "Fast and scalable structure-from-motion based localization for high-precision mobile augmented reality systems", 
        "pagination": "4", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1004728865"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1186/s13678-016-0005-0"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1186/s13678-016-0005-0", 
          "https://app.dimensions.ai/details/publication/pub.1004728865"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2021-12-01T19:38", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20211201/entities/gbq_results/article/article_708.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1186/s13678-016-0005-0"
      }
    ]
     

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1186/s13678-016-0005-0'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1186/s13678-016-0005-0'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1186/s13678-016-0005-0'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1186/s13678-016-0005-0'
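    Once the JSON-LD has been fetched with one of the curl commands above, it can be processed with any JSON tooling. The sketch below parses an abbreviated record mirroring the structure shown earlier (a list of schema.org-style objects) and pulls out the publication date and DOI; the `extract_doi` helper is illustrative, not part of any SciGraph API, and assumes the `productId` layout seen in this record.

```python
import json

# Abbreviated JSON-LD record, shaped like the full one shown above.
record_text = """
[
  {
    "name": "Fast and scalable structure-from-motion based localization for high-precision mobile augmented reality systems",
    "datePublished": "2016-07-19",
    "author": [
      {"familyName": "Bae", "givenName": "Hyojoon", "type": "Person"}
    ],
    "productId": [
      {"name": "doi", "type": "PropertyValue",
       "value": ["10.1186/s13678-016-0005-0"]}
    ]
  }
]
"""

def extract_doi(record):
    """Pull the DOI out of the schema:productId list, if present."""
    for pid in record.get("productId", []):
        if pid.get("name") == "doi":
            return pid["value"][0]
    return None

# The top-level JSON-LD value is a list; this record has one entry.
article = json.loads(record_text)[0]
print(article["datePublished"])   # 2016-07-19
print(extract_doi(article))       # 10.1186/s13678-016-0005-0
```

    For a live fetch, the same parsing applies to the body returned by the `Accept: application/ld+json` request above.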


     

    This table displays all metadata directly associated to this object as RDF triples.

    231 TRIPLES      22 PREDICATES      128 URIs      110 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1186/s13678-016-0005-0 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 anzsrc-for:0806
    4 schema:author Nccb641dd4300426f8cf9fd4afadb691c
    5 schema:citation sg:pub.10.1007/11957959_4
    6 sg:pub.10.1007/3-540-44480-7_21
    7 sg:pub.10.1007/978-3-642-15561-1_27
    8 sg:pub.10.1007/978-3-642-17758-3_10
    9 sg:pub.10.1007/978-3-642-34091-8_9
    10 sg:pub.10.1007/978-3-642-41914-0_31
    11 sg:pub.10.1007/s11263-007-0107-3
    12 sg:pub.10.1186/2213-7459-1-1
    13 sg:pub.10.1186/2213-7459-1-3
    14 schema:datePublished 2016-07-19
    15 schema:datePublishedReg 2016-07-19
    16 schema:description A key problem in mobile computing is providing people access to cyber-information associated with their surrounding physical objects. Mobile augmented reality is one of the emerging techniques that addresses this problem by allowing users to see the cyber-information associated with real-world physical objects by overlaying that cyber-information on the physical objects’ imagery. This paper presents a new vision-based context-aware approach for mobile augmented reality that allows users to query and access semantically-rich 3D cyber-information related to real-world physical objects and see it precisely overlaid on top of imagery of the associated physical objects. The approach does not require any RF-based location tracking modules, external hardware attachments on the mobile devices, and/or optical/fiducial markers for localizing a user’s position. Rather, the user’s 3D location and orientation are automatically and purely derived by comparing images from the user’s mobile device to a 3D point cloud model generated from a set of pre-collected photographs. Our approach supports content authoring where collaboration on editing the content stored in the 3D cloud is possible and content added by one user can be immediately accessible by others. In addition, a key challenge of scalability for mobile augmented reality is addressed in this paper. In general, mobile augmented reality is required to work regardless of users’ location and environment, in terms of physical scale, such as size of objects, and in terms of cyber-information scale, such as total number of cyber-information entities associated with physical objects. However, many existing approaches for mobile augmented reality have mainly tested their approaches on limited real-world use-cases and have challenges in scaling their approaches. 
By designing a multi-model based direct 2D-to-3D matching algorithms for localization, as well as applying a caching scheme, the proposed research consistently supports near real-time localization and information association regardless of users’ location, size of physical objects, and number of cyber-physical information items. Empirical results presented in the paper show that the approach can provide millimeter-level augmented reality across several hundred or thousand objects without the need for additional non-imagery sensor inputs.
    17 schema:genre article
    18 schema:inLanguage en
    19 schema:isAccessibleForFree true
    20 schema:isPartOf N6524d28f651943cfbdf71bdff7c964e5
    21 Ne9f1720f06d14732a1d7bf3359c8da47
    22 sg:journal.1135900
    23 schema:keywords Information Association
    24 RF-based location tracking modules
    25 access
    26 addition
    27 additional non-imagery sensor inputs
    28 algorithm
    29 approach
    30 association
    31 attachment
    32 challenges
    33 cloud
    34 cloud model
    35 collaboration
    36 computing
    37 content
    38 context-aware approach
    39 cyber-information
    40 cyber-information entities
    41 cyber-information scale
    42 cyber-physical information items
    43 devices
    44 direct 2D-to-3D
    45 empirical results
    46 entities
    47 environment
    48 external hardware attachments
    49 fiducial markers
    50 hardware attachment
    51 high-precision mobile
    52 imagery
    53 images
    54 information items
    55 input
    56 items
    57 key challenges
    58 key problem
    59 localization
    60 location
    61 location tracking modules
    62 markers
    63 mobile
    64 mobile computing
    65 mobile devices
    66 model
    67 module
    68 motion
    69 need
    70 new vision-based context-aware approach
    71 non-imagery sensor inputs
    72 number
    73 objects
    74 orientation
    75 paper
    76 paper show
    77 people's access
    78 photographs
    79 physical objects
    80 physical scales
    81 point cloud model
    82 position
    83 pre-collected photographs
    84 problem
    85 queries
    86 real-time localization
    87 real-world physical objects
    88 reality
    89 reality system
    90 research
    91 results
    92 rich 3D cyber-information
    93 scalability
    94 scalable structure
    95 scale
    96 scheme
    97 sensor inputs
    98 set
    99 show
    100 size
    101 size of objects
    102 structure
    103 system
    104 technique
    105 terms
    106 top
    107 top of imagery
    108 total number
    109 tracking module
    110 user location
    111 user position
    112 user's mobile device
    113 users
    114 user’s 3D location
    115 vision-based context-aware approach
    116 schema:name Fast and scalable structure-from-motion based localization for high-precision mobile augmented reality systems
    117 schema:pagination 4
    118 schema:productId N343fbbdb07834c578cee83f501d1eedd
    119 N5589b73108d84ad4bea2371c2da95185
    120 schema:sameAs https://app.dimensions.ai/details/publication/pub.1004728865
    121 https://doi.org/10.1186/s13678-016-0005-0
    122 schema:sdDatePublished 2021-12-01T19:38
    123 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    124 schema:sdPublisher N461ae1e97d78488984f3494632728789
    125 schema:url https://doi.org/10.1186/s13678-016-0005-0
    126 sgo:license sg:explorer/license/
    127 sgo:sdDataset articles
    128 rdf:type schema:ScholarlyArticle
    129 N158f98062f284cceb4cb8729fd771b01 rdf:first sg:person.016255777447.97
    130 rdf:rest Nb9208953059e4ab58031096c5dbc029a
    131 N343fbbdb07834c578cee83f501d1eedd schema:name doi
    132 schema:value 10.1186/s13678-016-0005-0
    133 rdf:type schema:PropertyValue
    134 N461ae1e97d78488984f3494632728789 schema:name Springer Nature - SN SciGraph project
    135 rdf:type schema:Organization
    136 N5589b73108d84ad4bea2371c2da95185 schema:name dimensions_id
    137 schema:value pub.1004728865
    138 rdf:type schema:PropertyValue
    139 N6524d28f651943cfbdf71bdff7c964e5 schema:volumeNumber 5
    140 rdf:type schema:PublicationVolume
    141 Nb9208953059e4ab58031096c5dbc029a rdf:first sg:person.011145667247.60
    142 rdf:rest Ned17541e112145869b37645114fecf6f
    143 Nbf0f99ffec724d2caa701d89d8669af8 rdf:first sg:person.011626113163.36
    144 rdf:rest N158f98062f284cceb4cb8729fd771b01
    145 Nccb641dd4300426f8cf9fd4afadb691c rdf:first sg:person.012462130215.47
    146 rdf:rest Neb8845c54ee644efb672e4deafb4344c
    147 Ne9f1720f06d14732a1d7bf3359c8da47 schema:issueNumber 1
    148 rdf:type schema:PublicationIssue
    149 Neb8845c54ee644efb672e4deafb4344c rdf:first sg:person.013257510615.69
    150 rdf:rest Nbf0f99ffec724d2caa701d89d8669af8
    151 Ned17541e112145869b37645114fecf6f rdf:first sg:person.011217127065.55
    152 rdf:rest rdf:nil
    153 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    154 schema:name Information and Computing Sciences
    155 rdf:type schema:DefinedTerm
    156 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    157 schema:name Artificial Intelligence and Image Processing
    158 rdf:type schema:DefinedTerm
    159 anzsrc-for:0806 schema:inDefinedTermSet anzsrc-for:
    160 schema:name Information Systems
    161 rdf:type schema:DefinedTerm
    162 sg:journal.1135900 schema:issn 2196-873X
    163 schema:name mUX: The Journal of Mobile User Experience
    164 schema:publisher Springer Nature
    165 rdf:type schema:Periodical
    166 sg:person.011145667247.60 schema:affiliation grid-institutes:grid.152326.1
    167 schema:familyName Sun
    168 schema:givenName Yu
    169 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011145667247.60
    170 rdf:type schema:Person
    171 sg:person.011217127065.55 schema:affiliation grid-institutes:grid.35403.31
    172 schema:familyName Golparvar-Fard
    173 schema:givenName Mani
    174 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011217127065.55
    175 rdf:type schema:Person
    176 sg:person.011626113163.36 schema:affiliation grid-institutes:grid.152326.1
    177 schema:familyName White
    178 schema:givenName Jules
    179 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011626113163.36
    180 rdf:type schema:Person
    181 sg:person.012462130215.47 schema:affiliation grid-institutes:grid.438526.e
    182 schema:familyName Bae
    183 schema:givenName Hyojoon
    184 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012462130215.47
    185 rdf:type schema:Person
    186 sg:person.013257510615.69 schema:affiliation grid-institutes:grid.152326.1
    187 schema:familyName Walker
    188 schema:givenName Michael
    189 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013257510615.69
    190 rdf:type schema:Person
    191 sg:person.016255777447.97 schema:affiliation grid-institutes:grid.152326.1
    192 schema:familyName Pan
    193 schema:givenName Yao
    194 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016255777447.97
    195 rdf:type schema:Person
    196 sg:pub.10.1007/11957959_4 schema:sameAs https://app.dimensions.ai/details/publication/pub.1052186402
    197 https://doi.org/10.1007/11957959_4
    198 rdf:type schema:CreativeWork
    199 sg:pub.10.1007/3-540-44480-7_21 schema:sameAs https://app.dimensions.ai/details/publication/pub.1021371683
    200 https://doi.org/10.1007/3-540-44480-7_21
    201 rdf:type schema:CreativeWork
    202 sg:pub.10.1007/978-3-642-15561-1_27 schema:sameAs https://app.dimensions.ai/details/publication/pub.1020939024
    203 https://doi.org/10.1007/978-3-642-15561-1_27
    204 rdf:type schema:CreativeWork
    205 sg:pub.10.1007/978-3-642-17758-3_10 schema:sameAs https://app.dimensions.ai/details/publication/pub.1016089420
    206 https://doi.org/10.1007/978-3-642-17758-3_10
    207 rdf:type schema:CreativeWork
    208 sg:pub.10.1007/978-3-642-34091-8_9 schema:sameAs https://app.dimensions.ai/details/publication/pub.1018624838
    209 https://doi.org/10.1007/978-3-642-34091-8_9
    210 rdf:type schema:CreativeWork
    211 sg:pub.10.1007/978-3-642-41914-0_31 schema:sameAs https://app.dimensions.ai/details/publication/pub.1016830637
    212 https://doi.org/10.1007/978-3-642-41914-0_31
    213 rdf:type schema:CreativeWork
    214 sg:pub.10.1007/s11263-007-0107-3 schema:sameAs https://app.dimensions.ai/details/publication/pub.1021569711
    215 https://doi.org/10.1007/s11263-007-0107-3
    216 rdf:type schema:CreativeWork
    217 sg:pub.10.1186/2213-7459-1-1 schema:sameAs https://app.dimensions.ai/details/publication/pub.1006017093
    218 https://doi.org/10.1186/2213-7459-1-1
    219 rdf:type schema:CreativeWork
    220 sg:pub.10.1186/2213-7459-1-3 schema:sameAs https://app.dimensions.ai/details/publication/pub.1022951741
    221 https://doi.org/10.1186/2213-7459-1-3
    222 rdf:type schema:CreativeWork
    223 grid-institutes:grid.152326.1 schema:alternateName Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, USA
    224 schema:name Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, USA
    225 rdf:type schema:Organization
    226 grid-institutes:grid.35403.31 schema:alternateName Department of Civil and Environmental Engineering and the Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, USA
    227 schema:name Department of Civil and Environmental Engineering and the Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, USA
    228 rdf:type schema:Organization
    229 grid-institutes:grid.438526.e schema:alternateName Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, USA
    230 schema:name Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, USA
    231 rdf:type schema:Organization
     



