Method For Temporally And Spatially Integrating And Managing A Plurality Of Videos, Device Used For The Same, And Recording Medium Storing Program Of The Method


Ontology type: sgo:Patent     


Patent Info

DATE

N/A

AUTHORS

AKUTSU, AKIHITO; HAMADA, HIROSHI; TONOMURA, YOSHINOBU

ABSTRACT

A photographing state detecting section (103) reads a picture data row and detects camera on/off information and camera operation information. A video dividing section (104) divides videos at every shot based on the camera on/off information and an object/background separating section (105) separates an object from the background based on the camera operation information. An object movement information extracting section (106) correlates object information between frames and a photographing space resynthesizing section (107) resynthesizes a photographing space from the camera operation information and the background. Then an inter-shot relation calculating section (108) calculates the spatial relation between a plurality of resynthesized photographing spaces. By temporally and spatially managing and accumulating the above-mentioned camera on/off information, camera operation information, object information, object movement information, resynthesized background information, and inter-shot relation information, one or more photographing spaces and objects are resynthesized and displayed or outputted in response to a request, etc., from a user.

Related SciGraph Publications

  • 1987-03. Epipolar-plane image analysis: An approach to determining structure from motion in INTERNATIONAL JOURNAL OF COMPUTER VISION

JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google's Structured Data Testing Tool (SDTT).

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/2746", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "name": "AKUTSU, AKIHITO", 
            "type": "Person"
          }, 
          {
            "name": "HAMADA, HIROSHI", 
            "type": "Person"
          }, 
          {
            "name": "TONOMURA, YOSHINOBU", 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/bf00128525", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1049897588", 
              "https://doi.org/10.1007/bf00128525"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "description": "A photographing state detecting section (103) reads a picture data row and detects camera on/off information and camera operation information. A video dividing section (104) divides videos at every shot based on the camera on/off information and an object/background separating section (105) separates an object from the background based on the camera operation information. An object movement information extracting section (106) correlates object information between frames and a photographing space resynthesizing section (107) resynthesizes a photographing space from the camera operation information and the background. Then an inter-shot relation calculating section (108) calculates the spatial relation between a plurality of resynthesized photographing spaces. By temporally and spatially managing and accumulating the above-mentioned camera on/off information, camera operation information, object information, object movement information, resynthesized background information, and inter-shot relation information, one or more photographing spaces and objects are resynthesized and displayed or outputted in response to a request, etc., from a user.", 
        "id": "sg:patent.EP-0866606-A1", 
        "keywords": [
          "Integrating", 
          "video", 
          "recording medium", 
          "state", 
          "section", 
          "picture", 
          "camera", 
          "operation", 
          "shot", 
          "extracting", 
          "object information", 
          "frame", 
          "calculating", 
          "spatial relation", 
          "plurality", 
          "movement", 
          "background information", 
          "relation", 
          "request", 
          "user"
        ], 
        "name": "METHOD FOR TEMPORALLY AND SPATIALLY INTEGRATING AND MANAGING A PLURALITY OF VIDEOS, DEVICE USED FOR THE SAME, AND RECORDING MEDIUM STORING PROGRAM OF THE METHOD", 
        "recipient": [
          {
            "id": "https://www.grid.ac/institutes/grid.419819.c", 
            "type": "Organization"
          }
        ], 
        "sameAs": [
          "https://app.dimensions.ai/details/patent/EP-0866606-A1"
        ], 
        "sdDataset": "patents", 
        "sdDatePublished": "2019-03-07T15:35", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com.uberresearch.data.dev.patents-pipeline/full_run_10/sn-export/5eb3e5a348d7f117b22cc85fb0b02730/0000100128-0000348334/json_export_b670b59c.jsonl", 
        "type": "Patent"
      }
    ]
     

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data that is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/patent.EP-0866606-A1'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/patent.EP-0866606-A1'
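Because every non-blank N-Triples line is one complete subject-predicate-object statement, a batch job can scan a dump line by line without a full RDF library. A minimal sketch with only the standard library; the sample triples are hypothetical simplifications of this record, not the actual export:

```python
from collections import Counter

# Hypothetical N-Triples excerpt: one "<subject> <predicate> <object> ." per line.
sample_nt = """\
<https://scigraph.springernature.com/patent.EP-0866606-A1> <http://schema.org/sdDataset> "patents" .
<https://scigraph.springernature.com/patent.EP-0866606-A1> <http://schema.org/sdDatePublished> "2019-03-07T15:35" .
<https://scigraph.springernature.com/patent.EP-0866606-A1> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://scigraph.springernature.com/ontologies/core/Patent> .
"""

predicate_counts = Counter()
for line in sample_nt.splitlines():
    line = line.strip()
    if not line or line.startswith("#"):  # skip blanks and comments
        continue
    # Subject and predicate IRIs contain no spaces, so two splits isolate them;
    # the object (which may be a quoted literal with spaces) stays in the rest.
    subject, predicate, _rest = line.split(" ", 2)
    predicate_counts[predicate] += 1

print(predicate_counts)
```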

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/patent.EP-0866606-A1'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/patent.EP-0866606-A1'
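The same content negotiation shown in the curl examples works from any HTTP client: only the `Accept` header changes per format. A minimal sketch with Python's urllib; the `scigraph_request` helper is an illustrative name, not part of any SciGraph client:

```python
from urllib.request import Request, urlopen

# MIME types matching the four curl examples above.
RDF_FORMATS = {
    "json-ld": "application/ld+json",
    "n-triples": "application/n-triples",
    "turtle": "text/turtle",
    "rdf-xml": "application/rdf+xml",
}

def scigraph_request(record_id: str, fmt: str = "json-ld") -> Request:
    """Build a content-negotiated request for a SciGraph record."""
    url = f"https://scigraph.springernature.com/{record_id}"
    return Request(url, headers={"Accept": RDF_FORMATS[fmt]})

req = scigraph_request("patent.EP-0866606-A1", "turtle")
print(req.full_url)              # the record URL
print(req.get_header("Accept"))  # text/turtle
# data = urlopen(req).read()     # uncomment to actually fetch the record
```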


     

    This table displays all metadata directly associated with this object as RDF triples.

    53 TRIPLES      14 PREDICATES      34 URIs      27 LITERALS      2 BLANK NODES

    Subject Predicate Object
    1 sg:patent.EP-0866606-A1 schema:about anzsrc-for:2746
    2 schema:author Nc8d014bf490b4a529ba6770773e3888c
    3 schema:citation sg:pub.10.1007/bf00128525
    4 schema:description <p>A photographing state detecting section (103) reads a picture data row and detects camera on/off information and camera operation information. A video dividing section (104) divides videos at every shot based on the camera on/off information and an object/background separating section (105) separates an object from the background based on the camera operation information. An object movement information extracting section (106) correlates object information between frames and a photographing space resynthesizing section (107) resynthesizes a photographing space from the camera operation information and the background. Then an inter-shot relation calculating section (108) calculates the spatial relation between a plurality of resynthesized photographing spaces. By temporally and spatially managing and accumulating the above-mentioned camera on/off information, camera operation information, object information, object movement information, resynthesized background information, and inter-shot relation information, one or more photographing spaces and objects are resynthesized and displayed or outputted in response to a request, etc., from a user. <img file="00000001.tif" id="img-00000001" he="78" wi="82" img-format="tif" img-content="ad"/></p>
    5 schema:keywords Integrating
    6 background information
    7 calculating
    8 camera
    9 extracting
    10 frame
    11 movement
    12 object information
    13 operation
    14 picture
    15 plurality
    16 recording medium
    17 relation
    18 request
    19 section
    20 shot
    21 spatial relation
    22 state
    23 user
    24 video
    25 schema:name METHOD FOR TEMPORALLY AND SPATIALLY INTEGRATING AND MANAGING A PLURALITY OF VIDEOS, DEVICE USED FOR THE SAME, AND RECORDING MEDIUM STORING PROGRAM OF THE METHOD
    26 schema:recipient https://www.grid.ac/institutes/grid.419819.c
    27 schema:sameAs https://app.dimensions.ai/details/patent/EP-0866606-A1
    28 schema:sdDatePublished 2019-03-07T15:35
    29 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    30 schema:sdPublisher Nd3dfd923f5164c8ebe938747d81f0ddb
    31 sgo:license sg:explorer/license/
    32 sgo:sdDataset patents
    33 rdf:type sgo:Patent
    34 N2b9bbe447b344e85a09881645148e8be rdf:first N94bbfced61554d80b631ca30af3276de
    35 rdf:rest rdf:nil
    36 N86ee1b93d5c641c684e33a3979b23212 schema:name AKUTSU, AKIHITO
    37 rdf:type schema:Person
    38 N94bbfced61554d80b631ca30af3276de schema:name TONOMURA, YOSHINOBU
    39 rdf:type schema:Person
    40 Na535a82aed704c49aab2f565b50105cf rdf:first Nab8cc5ba866948648c24644f719c340b
    41 rdf:rest N2b9bbe447b344e85a09881645148e8be
    42 Nab8cc5ba866948648c24644f719c340b schema:name HAMADA, HIROSHI
    43 rdf:type schema:Person
    44 Nc8d014bf490b4a529ba6770773e3888c rdf:first N86ee1b93d5c641c684e33a3979b23212
    45 rdf:rest Na535a82aed704c49aab2f565b50105cf
    46 Nd3dfd923f5164c8ebe938747d81f0ddb schema:name Springer Nature - SN SciGraph project
    47 rdf:type schema:Organization
    48 anzsrc-for:2746 schema:inDefinedTermSet anzsrc-for:
    49 rdf:type schema:DefinedTerm
    50 sg:pub.10.1007/bf00128525 schema:sameAs https://app.dimensions.ai/details/publication/pub.1049897588
    51 https://doi.org/10.1007/bf00128525
    52 rdf:type schema:CreativeWork
    53 https://www.grid.ac/institutes/grid.419819.c schema:Organization
     



