Dynamic selection of surfaces in real world for projection of information thereon


Ontology type: sgo:Patent     


Patent Info

DATE

N/A

AUTHORS

Tejas Dattatraya Kulkarni

ABSTRACT

One or more devices capture a scene of the real world and process one or more images that include distances to points on surfaces in the scene. The distances are used to automatically identify a set of surfaces in the real world. The devices then check whether a surface in the set is suitable for displaying an element of information to be projected into the scene. On finding a suitable surface, a transform function is automatically identified and applied to the element of information. The transformed element, which results from automatically applying the transform function, is stored in a frame buffer coupled to a projector, at a specific position in the frame buffer identified during the suitability check. When no surface is suitable, user input is obtained, and the information is projected as the user directs.
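As a rough illustration of the pipeline the abstract describes, the sketch below stubs out surface detection, the suitability check, and the transform with toy logic. All names and thresholds here are hypothetical and are not taken from the patent itself.

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class Surface:
        distance_m: float                    # distance from the device to the surface
        width_px: int                        # extent of the surface in projector pixels
        height_px: int
        frame_buffer_pos: Tuple[int, int]    # where the element would land in the frame buffer

    def identify_surfaces(depth_image) -> List[Surface]:
        """Stand-in for segmenting surfaces from the per-pixel distances."""
        return [Surface(distance_m=1.5, width_px=640, height_px=360,
                        frame_buffer_pos=(100, 80))]

    def is_suitable(surface: Surface, element_w: int, element_h: int) -> bool:
        """Toy suitability check: the surface must be large enough to hold the element."""
        return surface.width_px >= element_w and surface.height_px >= element_h

    def transform_element(element, surface: Surface):
        """Stand-in for the transform function (e.g. a perspective warp onto the surface)."""
        return element

    def project(element, depth_image, frame_buffer: dict) -> Optional[Tuple[int, int]]:
        """Walk the detected surfaces; on the first suitable one, store the transformed
        element in the frame buffer at the position found during the suitability check."""
        for surface in identify_surfaces(depth_image):
            if is_suitable(surface, element_w=320, element_h=120):
                frame_buffer[surface.frame_buffer_pos] = transform_element(element, surface)
                return surface.frame_buffer_pos
        return None   # no suitable surface: fall back to obtaining user input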

Related SciGraph Publications

  • 2004-11. Distinctive Image Features from Scale-Invariant Keypoints in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2004-05. Robust Real-Time Face Detection in INTERNATIONAL JOURNAL OF COMPUTER VISION
JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record in an external JSON-LD service such as the JSON-LD Playground or Google's Structured Data Testing Tool (SDTT).

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/2746", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "name": "Tejas Dattatraya Kulkarni", 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1023/b:visi.0000013087.49260.fb", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1001944608", 
              "https://doi.org/10.1023/b:visi.0000013087.49260.fb"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1023/b:visi.0000029664.99615.94", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1052687286", 
              "https://doi.org/10.1023/b:visi.0000029664.99615.94"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/5254.708428", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061186227"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.20965/jrm.2009.p0726", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1068822820"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "description": "

    One or more devices capture a scene of real world, and process one or more image(s) which include distances to points on surfaces in the real world. The distances are used to automatically identify a set of surfaces in the real world. Then, the one or more devices check whether a surface in the set is suitable for display of an element of information to be projected into the scene. On finding that a surface is suitable, a transform function is automatically identified, followed by automatic application of the transform function to the element of the information. A transformed element, which results from automatically applying the transform function, is stored in a frame buffer coupled to a projector, at a specific position in the frame buffer identified during the check for suitability. When no surface is suitable, user input is obtained, followed by projection of information as per user input.

    ", "id": "sg:patent.US-9245193-B2", "keywords": [ "selection", "surface", "real world", "projection", "Equipment and Supply", "scene", "distance", "check", "element", "buffer", "projector", "specific position", "suitability", "user input" ], "name": "Dynamic selection of surfaces in real world for projection of information thereon", "recipient": [ { "id": "https://www.grid.ac/institutes/grid.430388.4", "type": "Organization" } ], "sameAs": [ "https://app.dimensions.ai/details/patent/US-9245193-B2" ], "sdDataset": "patents", "sdDatePublished": "2019-03-07T15:37", "sdLicense": "https://scigraph.springernature.com/explorer/license/", "sdPublisher": { "name": "Springer Nature - SN SciGraph project", "type": "Organization" }, "sdSource": "s3://com.uberresearch.data.dev.patents-pipeline/full_run_10/sn-export/5eb3e5a348d7f117b22cc85fb0b02730/0000100128-0000348334/json_export_e5ffef63.jsonl", "type": "Patent" } ]
     

    Download the RDF metadata as JSON-LD, N-Triples, Turtle, or RDF/XML (see the license info for terms of use).

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/patent.US-9245193-B2'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/patent.US-9245193-B2'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/patent.US-9245193-B2'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/patent.US-9245193-B2'
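    The same content negotiation can be scripted. A minimal sketch in Python, assuming the requests library is available; the response shape (a one-element list) and the field names are taken from the JSON-LD record above:

    import requests

    URL = "https://scigraph.springernature.com/patent.US-9245193-B2"

    resp = requests.get(URL, headers={"Accept": "application/ld+json"})
    resp.raise_for_status()

    record = resp.json()[0]                        # the JSON-LD payload is a one-element list
    print(record["name"])                          # patent title
    print(", ".join(record["keywords"]))           # keyword literals
    print([c["id"] for c in record["citation"]])   # cited publications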


     

    This table displays all metadata directly associated with this object, as RDF triples.

    49 TRIPLES      14 PREDICATES      31 URIs      21 LITERALS      2 BLANK NODES

    Subject   Predicate   Object
    1   sg:patent.US-9245193-B2   schema:about   anzsrc-for:2746
    2   sg:patent.US-9245193-B2   schema:author   N9671275417f44882b5a5c43d6f293428
    3   sg:patent.US-9245193-B2   schema:citation   sg:pub.10.1023/b:visi.0000013087.49260.fb
    4   sg:patent.US-9245193-B2   schema:citation   sg:pub.10.1023/b:visi.0000029664.99615.94
    5   sg:patent.US-9245193-B2   schema:citation   https://doi.org/10.1109/5254.708428
    6   sg:patent.US-9245193-B2   schema:citation   https://doi.org/10.20965/jrm.2009.p0726
    7   sg:patent.US-9245193-B2   schema:description   "One or more devices capture a scene of real world, …" (full abstract as in the JSON-LD above)
    8   sg:patent.US-9245193-B2   schema:keywords   "Equipment and Supply"
    9   sg:patent.US-9245193-B2   schema:keywords   "buffer"
    10  sg:patent.US-9245193-B2   schema:keywords   "check"
    11  sg:patent.US-9245193-B2   schema:keywords   "distance"
    12  sg:patent.US-9245193-B2   schema:keywords   "element"
    13  sg:patent.US-9245193-B2   schema:keywords   "projection"
    14  sg:patent.US-9245193-B2   schema:keywords   "projector"
    15  sg:patent.US-9245193-B2   schema:keywords   "real world"
    16  sg:patent.US-9245193-B2   schema:keywords   "scene"
    17  sg:patent.US-9245193-B2   schema:keywords   "selection"
    18  sg:patent.US-9245193-B2   schema:keywords   "specific position"
    19  sg:patent.US-9245193-B2   schema:keywords   "suitability"
    20  sg:patent.US-9245193-B2   schema:keywords   "surface"
    21  sg:patent.US-9245193-B2   schema:keywords   "user input"
    22  sg:patent.US-9245193-B2   schema:name   "Dynamic selection of surfaces in real world for projection of information thereon"
    23  sg:patent.US-9245193-B2   schema:recipient   https://www.grid.ac/institutes/grid.430388.4
    24  sg:patent.US-9245193-B2   schema:sameAs   https://app.dimensions.ai/details/patent/US-9245193-B2
    25  sg:patent.US-9245193-B2   schema:sdDatePublished   "2019-03-07T15:37"
    26  sg:patent.US-9245193-B2   schema:sdLicense   https://scigraph.springernature.com/explorer/license/
    27  sg:patent.US-9245193-B2   schema:sdPublisher   Nfa76abff16854c91a818ff317e935d2c
    28  sg:patent.US-9245193-B2   sgo:license   sg:explorer/license/
    29  sg:patent.US-9245193-B2   sgo:sdDataset   "patents"
    30  sg:patent.US-9245193-B2   rdf:type   sgo:Patent
    31  N8872a48505d34bb1a2d35196c92d71b1   schema:name   "Tejas Dattatraya Kulkarni"
    32  N8872a48505d34bb1a2d35196c92d71b1   rdf:type   schema:Person
    33  N9671275417f44882b5a5c43d6f293428   rdf:first   N8872a48505d34bb1a2d35196c92d71b1
    34  N9671275417f44882b5a5c43d6f293428   rdf:rest   rdf:nil
    35  Nfa76abff16854c91a818ff317e935d2c   schema:name   "Springer Nature - SN SciGraph project"
    36  Nfa76abff16854c91a818ff317e935d2c   rdf:type   schema:Organization
    37  anzsrc-for:2746   schema:inDefinedTermSet   anzsrc-for:
    38  anzsrc-for:2746   rdf:type   schema:DefinedTerm
    39  sg:pub.10.1023/b:visi.0000013087.49260.fb   schema:sameAs   https://app.dimensions.ai/details/publication/pub.1001944608
    40  sg:pub.10.1023/b:visi.0000013087.49260.fb   schema:sameAs   https://doi.org/10.1023/b:visi.0000013087.49260.fb
    41  sg:pub.10.1023/b:visi.0000013087.49260.fb   rdf:type   schema:CreativeWork
    42  sg:pub.10.1023/b:visi.0000029664.99615.94   schema:sameAs   https://app.dimensions.ai/details/publication/pub.1052687286
    43  sg:pub.10.1023/b:visi.0000029664.99615.94   schema:sameAs   https://doi.org/10.1023/b:visi.0000029664.99615.94
    44  sg:pub.10.1023/b:visi.0000029664.99615.94   rdf:type   schema:CreativeWork
    45  https://doi.org/10.1109/5254.708428   schema:sameAs   https://app.dimensions.ai/details/publication/pub.1061186227
    46  https://doi.org/10.1109/5254.708428   rdf:type   schema:CreativeWork
    47  https://doi.org/10.20965/jrm.2009.p0726   schema:sameAs   https://app.dimensions.ai/details/publication/pub.1068822820
    48  https://doi.org/10.20965/jrm.2009.p0726   rdf:type   schema:CreativeWork
    49  https://www.grid.ac/institutes/grid.430388.4   rdf:type   schema:Organization
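
    As a sketch of working with these triples programmatically (assuming the rdflib library, and assuming the schema: prefix expands to http://schema.org/ as is conventional), the N-Triples export can be loaded and queried like this:

    import requests
    from rdflib import Graph, Namespace

    URL = "https://scigraph.springernature.com/patent.US-9245193-B2"
    SCHEMA = Namespace("http://schema.org/")   # assumed expansion of the schema: prefix

    resp = requests.get(URL, headers={"Accept": "application/n-triples"})
    resp.raise_for_status()

    g = Graph()
    g.parse(data=resp.text, format="nt")
    print(len(g))                              # number of triples; the table above lists 49

    # list the cited publications (objects of schema:citation)
    for _, obj in g.subject_objects(SCHEMA.citation):
        print(obj)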
     



