Method For Temporally And Spatially Integrating And Managing A Plurality Of Videos, Device Used For The Same, And Recording Medium Storing Program Of The Method


Ontology type: sgo:Patent     


Patent Info

DATE

N/A

AUTHORS

AKUTSU AKIHITO , TONOMURA YOSHINOBU , HAMADA HIROSHI

ABSTRACT

A photographing state detecting section (103) reads a picture data row and detects camera on/off information and camera operation information. A video dividing section (104) divides videos at every shot based on the camera on/off information and an object/background separating section (105) separates an object from the background based on the camera operation information. An object movement information extracting section (106) correlates object information between frames and a photographing space resynthesizing section (107) resynthesizes a photographing space from the camera operation information and the background. Then an inter-shot relation calculating section (108) calculates the spatial relation between a plurality of resynthesized photographing spaces. By temporally and spatially managing and accumulating the above-mentioned camera on/off information, camera operation information, object information, object movement information, resynthesized background information, and inter-shot relation information, one or more photographing spaces and objects are resynthesized and displayed or outputted in response to a request, etc., from a user.

Related SciGraph Publications

  • 1987-03. Epipolar-plane image analysis: An approach to determining structure from motion in INTERNATIONAL JOURNAL OF COMPUTER VISION
JSON-LD is the canonical representation for SciGraph data.

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/2746", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "name": "AKUTSU AKIHITO", 
            "type": "Person"
          }, 
          {
            "name": "TONOMURA YOSHINOBU", 
            "type": "Person"
          }, 
          {
            "name": "HAMADA HIROSHI", 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/bf00128525", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1049897588", 
              "https://doi.org/10.1007/bf00128525"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "description": "

    A photographing state detecting section (103) reads a picture data row and detects camera on/off information and camera operation information. A video dividing section (104) divides videos at every shot based on the camera on/off information and an object/background separating section (105) separates an object from the background based on the camera operation information. An object movement information extracting section (106) correlates object information between frames and a photographing space resynthesizing section (107) resynthesizes a photographing space from the camera operation information and the background. Then an inter-shot relation calculating section (108) calculates the spatial relation between a plurality of resynthesized photographing spaces. By temporally and spatially managing and accumulating the above-mentioned camera on/off information, camera operation information, object information, object movement information, resynthesized background information, and inter-shot relation information, one or more photographing spaces and objects are resynthesized and displayed or outputted in response to a request, etc., from a user.

    ", "id": "sg:patent.EP-0866606-A4", "keywords": [ "Integrating", "video", "recording medium", "state", "section", "picture", "camera", "operation", "shot", "extracting", "object information", "frame", "calculating", "spatial relation", "plurality", "movement", "background information", "relation", "request", "user" ], "name": "METHOD FOR TEMPORALLY AND SPATIALLY INTEGRATING AND MANAGING A PLURALITY OF VIDEOS, DEVICE USED FOR THE SAME, AND RECORDING MEDIUM STORING PROGRAM OF THE METHOD", "recipient": [ { "id": "https://www.grid.ac/institutes/grid.419819.c", "type": "Organization" } ], "sameAs": [ "https://app.dimensions.ai/details/patent/EP-0866606-A4" ], "sdDataset": "patents", "sdDatePublished": "2019-03-07T15:35", "sdLicense": "https://scigraph.springernature.com/explorer/license/", "sdPublisher": { "name": "Springer Nature - SN SciGraph project", "type": "Organization" }, "sdSource": "s3://com.uberresearch.data.dev.patents-pipeline/full_run_10/sn-export/5eb3e5a348d7f117b22cc85fb0b02730/0000100128-0000348334/json_export_b670b59c.jsonl", "type": "Patent" } ]
     

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/patent.EP-0866606-A4'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/patent.EP-0866606-A4'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/patent.EP-0866606-A4'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/patent.EP-0866606-A4'
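The four curl calls differ only in the Accept header, so in a script the same content negotiation reduces to a lookup table. A minimal sketch (the helper name `request_parts` is illustrative, not part of any SciGraph client library):

```python
# Accept headers used by the four curl examples above.
ACCEPT = {
    "json-ld": "application/ld+json",
    "nt":      "application/n-triples",
    "turtle":  "text/turtle",
    "rdf-xml": "application/rdf+xml",
}

BASE = "https://scigraph.springernature.com/"

def request_parts(record_id: str, fmt: str) -> tuple:
    """Return the URL and headers for fetching `record_id` in format `fmt`."""
    return BASE + record_id, {"Accept": ACCEPT[fmt]}

url, headers = request_parts("patent.EP-0866606-A4", "turtle")
print(url)
print(headers)
```

Passing `url` and `headers` to any HTTP client (e.g. `requests.get(url, headers=headers)`) reproduces the corresponding curl command.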


     

    This table displays all metadata directly associated with this object as RDF triples.

    53 TRIPLES      14 PREDICATES      34 URIs      27 LITERALS      2 BLANK NODES

    Subject Predicate Object
    1 sg:patent.EP-0866606-A4 schema:about anzsrc-for:2746
    2 schema:author Nfd6a8db9fa0a483cb4d20830b5bf1027
    3 schema:citation sg:pub.10.1007/bf00128525
    4 schema:description <p>A photographing state detecting section (103) reads a picture data row and detects camera on/off information and camera operation information. A video dividing section (104) divides videos at every shot based on the camera on/off information and an object/background separating section (105) separates an object from the background based on the camera operation information. An object movement information extracting section (106) correlates object information between frames and a photographing space resynthesizing section (107) resynthesizes a photographing space from the camera operation information and the background. Then an inter-shot relation calculating section (108) calculates the spatial relation between a plurality of resynthesized photographing spaces. By temporally and spatially managing and accumulating the above-mentioned camera on/off information, camera operation information, object information, object movement information, resynthesized background information, and inter-shot relation information, one or more photographing spaces and objects are resynthesized and displayed or outputted in response to a request, etc., from a user.</p>
    5 schema:keywords Integrating
    6 background information
    7 calculating
    8 camera
    9 extracting
    10 frame
    11 movement
    12 object information
    13 operation
    14 picture
    15 plurality
    16 recording medium
    17 relation
    18 request
    19 section
    20 shot
    21 spatial relation
    22 state
    23 user
    24 video
    25 schema:name METHOD FOR TEMPORALLY AND SPATIALLY INTEGRATING AND MANAGING A PLURALITY OF VIDEOS, DEVICE USED FOR THE SAME, AND RECORDING MEDIUM STORING PROGRAM OF THE METHOD
    26 schema:recipient https://www.grid.ac/institutes/grid.419819.c
    27 schema:sameAs https://app.dimensions.ai/details/patent/EP-0866606-A4
    28 schema:sdDatePublished 2019-03-07T15:35
    29 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    30 schema:sdPublisher Nfb45c775bc4449d5a4844f98e84071d0
    31 sgo:license sg:explorer/license/
    32 sgo:sdDataset patents
    33 rdf:type sgo:Patent
    34 N419a4e66cc70494cb82571579385e229 rdf:first Nae95720edeb24112832b4f405e529991
    35 rdf:rest N63210ceab41145e09cdca93fc4233d42
    36 N63210ceab41145e09cdca93fc4233d42 rdf:first Naa0ef695075c4b7c97766e982a035ae7
    37 rdf:rest rdf:nil
    38 Naa0ef695075c4b7c97766e982a035ae7 schema:name HAMADA HIROSHI
    39 rdf:type schema:Person
    40 Nae95720edeb24112832b4f405e529991 schema:name TONOMURA YOSHINOBU
    41 rdf:type schema:Person
    42 Nf4fc01efc59f43e0bc62f3559da503cd schema:name AKUTSU AKIHITO
    43 rdf:type schema:Person
    44 Nfb45c775bc4449d5a4844f98e84071d0 schema:name Springer Nature - SN SciGraph project
    45 rdf:type schema:Organization
    46 Nfd6a8db9fa0a483cb4d20830b5bf1027 rdf:first Nf4fc01efc59f43e0bc62f3559da503cd
    47 rdf:rest N419a4e66cc70494cb82571579385e229
    48 anzsrc-for:2746 schema:inDefinedTermSet anzsrc-for:
    49 rdf:type schema:DefinedTerm
    50 sg:pub.10.1007/bf00128525 schema:sameAs https://app.dimensions.ai/details/publication/pub.1049897588
    51 https://doi.org/10.1007/bf00128525
    52 rdf:type schema:CreativeWork
    53 https://www.grid.ac/institutes/grid.419819.c rdf:type schema:Organization
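Rows 34 to 47 of the table store the author list as an RDF collection: a linked list of blank nodes chained by rdf:first/rdf:rest and terminated by rdf:nil, which preserves author order. Walking such a chain can be sketched as follows (the triples are hand-copied from the table above into plain dicts; a real consumer would use an RDF library such as rdflib):

```python
# rdf:first / rdf:rest chain copied from rows 34-47 of the table above.
first = {
    "Nfd6a8db9fa0a483cb4d20830b5bf1027": "Nf4fc01efc59f43e0bc62f3559da503cd",
    "N419a4e66cc70494cb82571579385e229": "Nae95720edeb24112832b4f405e529991",
    "N63210ceab41145e09cdca93fc4233d42": "Naa0ef695075c4b7c97766e982a035ae7",
}
rest = {
    "Nfd6a8db9fa0a483cb4d20830b5bf1027": "N419a4e66cc70494cb82571579385e229",
    "N419a4e66cc70494cb82571579385e229": "N63210ceab41145e09cdca93fc4233d42",
    "N63210ceab41145e09cdca93fc4233d42": "rdf:nil",
}
names = {
    "Nf4fc01efc59f43e0bc62f3559da503cd": "AKUTSU AKIHITO",
    "Nae95720edeb24112832b4f405e529991": "TONOMURA YOSHINOBU",
    "Naa0ef695075c4b7c97766e982a035ae7": "HAMADA HIROSHI",
}

def walk_collection(head: str) -> list:
    """Follow rdf:rest links from `head` until rdf:nil, collecting rdf:first values."""
    items = []
    node = head
    while node != "rdf:nil":
        items.append(first[node])
        node = rest[node]
    return items

# The schema:author triple (row 2) points at the head blank node.
authors = [names[n] for n in walk_collection("Nfd6a8db9fa0a483cb4d20830b5bf1027")]
print(authors)
```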
     



