Unsupervised Identification of Multiple Objects of Interest from Multiple Images: dISCOVER


Ontology type: schema:Chapter     


Chapter Info

DATE

2007

AUTHORS

Devi Parikh, Tsuhan Chen

ABSTRACT

Given a collection of images of offices, what would we say we see in the images? The objects of interest are likely to be monitors, keyboards, phones, etc. Such identification of the foreground in a scene is important to avoid distractions caused by background clutter and facilitates better understanding of the scene. It is crucial for such an identification to be unsupervised to avoid extensive human labeling as well as biases induced by human intervention. Most interesting scenes contain multiple objects of interest. Hence, it would be useful to separate the foreground into the multiple objects it contains. We propose dISCOVER, an unsupervised approach to identifying the multiple objects of interest in a scene from a collection of images. In order to achieve this, it exploits the consistency in foreground objects - in terms of occurrence and geometry - across the multiple images of the scene.
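The abstract only sketches the idea at a high level. Purely to make the occurrence-consistency intuition concrete (this is not the authors' actual dISCOVER pipeline, which also exploits geometric consistency), the toy Python sketch below keeps visual words that recur across many images of a scene and then groups them into putative objects by their co-occurrence patterns. All function names, thresholds, and the use of k-means are assumptions made for the illustration.

    # Toy sketch of the occurrence-consistency idea only; NOT the dISCOVER method.
    # Assumes images are already quantized into "visual word" indices
    # (e.g., clustered SIFT descriptors); geometric consistency is omitted.
    import numpy as np
    from sklearn.cluster import KMeans

    def foreground_words(image_word_lists, n_words, min_frac=0.5):
        """Keep visual words that occur in at least min_frac of the images."""
        occurrence = np.zeros((n_words, len(image_word_lists)), dtype=bool)
        for j, words in enumerate(image_word_lists):
            occurrence[sorted(set(words)), j] = True
        frac = occurrence.mean(axis=1)            # how consistently each word appears
        keep = np.flatnonzero(frac >= min_frac)   # candidate "foreground" words
        return keep, occurrence

    def split_into_objects(keep, occurrence, n_objects=2):
        """Group foreground words into putative objects by co-occurrence pattern."""
        profiles = occurrence[keep].astype(float)  # one row per kept word
        labels = KMeans(n_clusters=n_objects, n_init=10,
                        random_state=0).fit_predict(profiles)
        return {k: keep[labels == k].tolist() for k in range(n_objects)}

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Toy data: words 0-9 appear in every image ("monitor"), words 10-19 in
        # every other image ("phone"), the rest is random background clutter.
        images = []
        for i in range(20):
            words = list(range(10)) + rng.integers(20, 100, size=15).tolist()
            if i % 2 == 0:
                words += list(range(10, 20))
            images.append(words)
        keep, occ = foreground_words(images, n_words=100, min_frac=0.5)
        print(split_into_objects(keep, occ, n_objects=2))

In the chapter itself, geometric consistency across images is used in addition to occurrence; the script above is only meant to show why features that recur consistently across a collection plausibly belong to the foreground objects rather than the background clutter.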

PAGES

487-496

References to SciGraph publications

  • 2004-11. Distinctive Image Features from Scale-Invariant Keypoints in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2001-01. Unsupervised Learning by Probabilistic Latent Semantic Analysis in MACHINE LEARNING
  • 2002. On Affine Invariant Clustering and Automatic Cast Listing in Movies in COMPUTER VISION — ECCV 2002
  • 2006. A Robust Approach for Object Recognition in ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2006
  • 2000. Unsupervised Learning of Models for Recognition in COMPUTER VISION - ECCV 2000
  • Book

    TITLE

    Computer Vision – ACCV 2007

    ISBN

    978-3-540-76389-5

    Author Affiliations

    Carnegie Mellon University (Devi Parikh, Tsuhan Chen)

    Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/978-3-540-76390-1_48

    DOI

    http://dx.doi.org/10.1007/978-3-540-76390-1_48

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1014060599



    JSON-LD is the canonical representation for SciGraph data.


    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/1701", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Psychology", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/17", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Psychology and Cognitive Sciences", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Carnegie Mellon University", 
              "id": "https://www.grid.ac/institutes/grid.147455.6", 
              "name": [
                "Carnegie Mellon University"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Parikh", 
            "givenName": "Devi", 
            "id": "sg:person.01310632454.16", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01310632454.16"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Carnegie Mellon University", 
              "id": "https://www.grid.ac/institutes/grid.147455.6", 
              "name": [
                "Carnegie Mellon University"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Chen", 
            "givenName": "Tsuhan", 
            "id": "sg:person.012245072625.31", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012245072625.31"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/3-540-47977-5_20", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1003775768", 
              "https://doi.org/10.1007/3-540-47977-5_20"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/3-540-45054-8_2", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1014228158", 
              "https://doi.org/10.1007/3-540-45054-8_2"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/11922162_31", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1014348238", 
              "https://doi.org/10.1007/11922162_31"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1023/a:1007617005950", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1016248609", 
              "https://doi.org/10.1023/a:1007617005950"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1023/b:visi.0000029664.99615.94", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1052687286", 
              "https://doi.org/10.1023/b:visi.0000029664.99615.94"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2001.990529", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1093171820"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvprw.2006.192", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1093480652"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2003.1211479", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1093624919"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2004.1315253", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1093697819"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/iccv.2005.77", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094132829"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2004.1315071", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094292261"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/iccv.2005.152", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094301320"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2006.68", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094512911"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2006.288", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094648993"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/iccv.2005.142", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094700637"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/iccv.2005.20", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094741769"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2006.326", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095068040"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2005.16", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095244523"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2007", 
        "datePublishedReg": "2007-01-01", 
        "description": "Given a collection of images of offices, what would we say we see in the images? The objects of interest are likely to be monitors, keyboards, phones, etc. Such identification of the foreground in a scene is important to avoid distractions caused by background clutter and facilitates better understanding of the scene. It is crucial for such an identification to be unsupervised to avoid extensive human labeling as well as biases induced by human intervention. Most interesting scenes contain multiple objects of interest. Hence, it would be useful to separate the foreground into the multiple objects it contains. We propose dISCOVER, an unsupervised approach to identifying the multiple objects of interest in a scene from a collection of images. In order to achieve this, it exploits the consistency in foreground objects - in terms of occurrence and geometry - across the multiple images of the scene.", 
        "editor": [
          {
            "familyName": "Yagi", 
            "givenName": "Yasushi", 
            "type": "Person"
          }, 
          {
            "familyName": "Kang", 
            "givenName": "Sing Bing", 
            "type": "Person"
          }, 
          {
            "familyName": "Kweon", 
            "givenName": "In So", 
            "type": "Person"
          }, 
          {
            "familyName": "Zha", 
            "givenName": "Hongbin", 
            "type": "Person"
          }
        ], 
        "genre": "chapter", 
        "id": "sg:pub.10.1007/978-3-540-76390-1_48", 
        "inLanguage": [
          "en"
        ], 
        "isAccessibleForFree": false, 
        "isPartOf": {
          "isbn": [
            "978-3-540-76389-5"
          ], 
          "name": "Computer Vision \u2013 ACCV 2007", 
          "type": "Book"
        }, 
        "name": "Unsupervised Identification of Multiple Objects of Interest from Multiple Images: dISCOVER", 
        "pagination": "487-496", 
        "productId": [
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/978-3-540-76390-1_48"
            ]
          }, 
          {
            "name": "readcube_id", 
            "type": "PropertyValue", 
            "value": [
              "42fb4be6e09206d900880a07c74e0982d0129c1103837bd99a0b446d7c95b1b5"
            ]
          }, 
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1014060599"
            ]
          }
        ], 
        "publisher": {
          "location": "Berlin, Heidelberg", 
          "name": "Springer Berlin Heidelberg", 
          "type": "Organisation"
        }, 
        "sameAs": [
          "https://doi.org/10.1007/978-3-540-76390-1_48", 
          "https://app.dimensions.ai/details/publication/pub.1014060599"
        ], 
        "sdDataset": "chapters", 
        "sdDatePublished": "2019-04-16T05:47", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000347_0000000347/records_89814_00000000.jsonl", 
        "type": "Chapter", 
        "url": "https://link.springer.com/10.1007%2F978-3-540-76390-1_48"
      }
    ]
     

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/978-3-540-76390-1_48'
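
    The same request can be issued from Python; the sketch below assumes the `requests` package is installed and that the endpoint returns the single-element JSON-LD array shown above.

    # Minimal Python equivalent of the curl call above.
    import requests

    URL = "https://scigraph.springernature.com/pub.10.1007/978-3-540-76390-1_48"
    resp = requests.get(URL, headers={"Accept": "application/ld+json"})
    resp.raise_for_status()
    record = resp.json()[0]                 # the record is a one-element JSON array
    print(record["name"], "-", record["datePublished"])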

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/978-3-540-76390-1_48'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/978-3-540-76390-1_48'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/978-3-540-76390-1_48'
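
    For the RDF serializations, a graph library can parse the response directly. The sketch below assumes `rdflib` and `requests` are installed and uses the Turtle variant from above.

    # Sketch: load the record into an rdflib graph via content negotiation.
    import requests
    from rdflib import Graph

    URL = "https://scigraph.springernature.com/pub.10.1007/978-3-540-76390-1_48"
    ttl = requests.get(URL, headers={"Accept": "text/turtle"}).text
    g = Graph()
    g.parse(data=ttl, format="turtle")
    print(len(g), "triples loaded")         # should match the triple count below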


     

    This table displays all metadata directly associated with this object as RDF triples.

    145 TRIPLES      23 PREDICATES      45 URIs      20 LITERALS      8 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/978-3-540-76390-1_48 schema:about anzsrc-for:17
    2 anzsrc-for:1701
    3 schema:author N68303e00c08f438e874770169fe78e08
    4 schema:citation sg:pub.10.1007/11922162_31
    5 sg:pub.10.1007/3-540-45054-8_2
    6 sg:pub.10.1007/3-540-47977-5_20
    7 sg:pub.10.1023/a:1007617005950
    8 sg:pub.10.1023/b:visi.0000029664.99615.94
    9 https://doi.org/10.1109/cvpr.2001.990529
    10 https://doi.org/10.1109/cvpr.2003.1211479
    11 https://doi.org/10.1109/cvpr.2004.1315071
    12 https://doi.org/10.1109/cvpr.2004.1315253
    13 https://doi.org/10.1109/cvpr.2005.16
    14 https://doi.org/10.1109/cvpr.2006.288
    15 https://doi.org/10.1109/cvpr.2006.326
    16 https://doi.org/10.1109/cvpr.2006.68
    17 https://doi.org/10.1109/cvprw.2006.192
    18 https://doi.org/10.1109/iccv.2005.142
    19 https://doi.org/10.1109/iccv.2005.152
    20 https://doi.org/10.1109/iccv.2005.20
    21 https://doi.org/10.1109/iccv.2005.77
    22 schema:datePublished 2007
    23 schema:datePublishedReg 2007-01-01
    24 schema:description Given a collection of images of offices, what would we say we see in the images? The objects of interest are likely to be monitors, keyboards, phones, etc. Such identification of the foreground in a scene is important to avoid distractions caused by background clutter and facilitates better understanding of the scene. It is crucial for such an identification to be unsupervised to avoid extensive human labeling as well as biases induced by human intervention. Most interesting scenes contain multiple objects of interest. Hence, it would be useful to separate the foreground into the multiple objects it contains. We propose dISCOVER, an unsupervised approach to identifying the multiple objects of interest in a scene from a collection of images. In order to achieve this, it exploits the consistency in foreground objects - in terms of occurrence and geometry - across the multiple images of the scene.
    25 schema:editor N9b8aa80408234318a94b45724af7b296
    26 schema:genre chapter
    27 schema:inLanguage en
    28 schema:isAccessibleForFree false
    29 schema:isPartOf Nec43fbcf78564cc6b72e1bb5579ee165
    30 schema:name Unsupervised Identification of Multiple Objects of Interest from Multiple Images: dISCOVER
    31 schema:pagination 487-496
    32 schema:productId N55d2ded175814d108830288dd039c003
    33 Nb4860cd7fff341e49c9991d73f40ef03
    34 Nc536cc66f5f740a6b40f4917e4bd263b
    35 schema:publisher N00c86ee0c68442ffa826835ea64ce507
    36 schema:sameAs https://app.dimensions.ai/details/publication/pub.1014060599
    37 https://doi.org/10.1007/978-3-540-76390-1_48
    38 schema:sdDatePublished 2019-04-16T05:47
    39 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    40 schema:sdPublisher N9ec172f3115543728d3e74c4f09f0793
    41 schema:url https://link.springer.com/10.1007%2F978-3-540-76390-1_48
    42 sgo:license sg:explorer/license/
    43 sgo:sdDataset chapters
    44 rdf:type schema:Chapter
    45 N00c86ee0c68442ffa826835ea64ce507 schema:location Berlin, Heidelberg
    46 schema:name Springer Berlin Heidelberg
    47 rdf:type schema:Organisation
    48 N379a4fc3705141239f4051ea945260a3 schema:familyName Kweon
    49 schema:givenName In So
    50 rdf:type schema:Person
    51 N4c3f28a3e06f46218c95e5373f9d17d7 schema:familyName Yagi
    52 schema:givenName Yasushi
    53 rdf:type schema:Person
    54 N55d2ded175814d108830288dd039c003 schema:name doi
    55 schema:value 10.1007/978-3-540-76390-1_48
    56 rdf:type schema:PropertyValue
    57 N68303e00c08f438e874770169fe78e08 rdf:first sg:person.01310632454.16
    58 rdf:rest N99ec2e6386a0409d9b64889ab366c231
    59 N99ec2e6386a0409d9b64889ab366c231 rdf:first sg:person.012245072625.31
    60 rdf:rest rdf:nil
    61 N9b8aa80408234318a94b45724af7b296 rdf:first N4c3f28a3e06f46218c95e5373f9d17d7
    62 rdf:rest Ne5b8e7ac262f4d4782bcb77f3c891f3c
    63 N9ec172f3115543728d3e74c4f09f0793 schema:name Springer Nature - SN SciGraph project
    64 rdf:type schema:Organization
    65 Naafeeb2f874248fabb87ad2c255b54e5 rdf:first Nd182ad812306479b98f1494cc31df82e
    66 rdf:rest rdf:nil
    67 Nb4860cd7fff341e49c9991d73f40ef03 schema:name dimensions_id
    68 schema:value pub.1014060599
    69 rdf:type schema:PropertyValue
    70 Nb5ccee971cee4ed0bbe2a5bea3109781 rdf:first N379a4fc3705141239f4051ea945260a3
    71 rdf:rest Naafeeb2f874248fabb87ad2c255b54e5
    72 Nc536cc66f5f740a6b40f4917e4bd263b schema:name readcube_id
    73 schema:value 42fb4be6e09206d900880a07c74e0982d0129c1103837bd99a0b446d7c95b1b5
    74 rdf:type schema:PropertyValue
    75 Nd182ad812306479b98f1494cc31df82e schema:familyName Zha
    76 schema:givenName Hongbin
    77 rdf:type schema:Person
    78 Nda7a14c20c314369a053228e956eeeb2 schema:familyName Kang
    79 schema:givenName Sing Bing
    80 rdf:type schema:Person
    81 Ne5b8e7ac262f4d4782bcb77f3c891f3c rdf:first Nda7a14c20c314369a053228e956eeeb2
    82 rdf:rest Nb5ccee971cee4ed0bbe2a5bea3109781
    83 Nec43fbcf78564cc6b72e1bb5579ee165 schema:isbn 978-3-540-76389-5
    84 schema:name Computer Vision – ACCV 2007
    85 rdf:type schema:Book
    86 anzsrc-for:17 schema:inDefinedTermSet anzsrc-for:
    87 schema:name Psychology and Cognitive Sciences
    88 rdf:type schema:DefinedTerm
    89 anzsrc-for:1701 schema:inDefinedTermSet anzsrc-for:
    90 schema:name Psychology
    91 rdf:type schema:DefinedTerm
    92 sg:person.012245072625.31 schema:affiliation https://www.grid.ac/institutes/grid.147455.6
    93 schema:familyName Chen
    94 schema:givenName Tsuhan
    95 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012245072625.31
    96 rdf:type schema:Person
    97 sg:person.01310632454.16 schema:affiliation https://www.grid.ac/institutes/grid.147455.6
    98 schema:familyName Parikh
    99 schema:givenName Devi
    100 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01310632454.16
    101 rdf:type schema:Person
    102 sg:pub.10.1007/11922162_31 schema:sameAs https://app.dimensions.ai/details/publication/pub.1014348238
    103 https://doi.org/10.1007/11922162_31
    104 rdf:type schema:CreativeWork
    105 sg:pub.10.1007/3-540-45054-8_2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1014228158
    106 https://doi.org/10.1007/3-540-45054-8_2
    107 rdf:type schema:CreativeWork
    108 sg:pub.10.1007/3-540-47977-5_20 schema:sameAs https://app.dimensions.ai/details/publication/pub.1003775768
    109 https://doi.org/10.1007/3-540-47977-5_20
    110 rdf:type schema:CreativeWork
    111 sg:pub.10.1023/a:1007617005950 schema:sameAs https://app.dimensions.ai/details/publication/pub.1016248609
    112 https://doi.org/10.1023/a:1007617005950
    113 rdf:type schema:CreativeWork
    114 sg:pub.10.1023/b:visi.0000029664.99615.94 schema:sameAs https://app.dimensions.ai/details/publication/pub.1052687286
    115 https://doi.org/10.1023/b:visi.0000029664.99615.94
    116 rdf:type schema:CreativeWork
    117 https://doi.org/10.1109/cvpr.2001.990529 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093171820
    118 rdf:type schema:CreativeWork
    119 https://doi.org/10.1109/cvpr.2003.1211479 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093624919
    120 rdf:type schema:CreativeWork
    121 https://doi.org/10.1109/cvpr.2004.1315071 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094292261
    122 rdf:type schema:CreativeWork
    123 https://doi.org/10.1109/cvpr.2004.1315253 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093697819
    124 rdf:type schema:CreativeWork
    125 https://doi.org/10.1109/cvpr.2005.16 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095244523
    126 rdf:type schema:CreativeWork
    127 https://doi.org/10.1109/cvpr.2006.288 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094648993
    128 rdf:type schema:CreativeWork
    129 https://doi.org/10.1109/cvpr.2006.326 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095068040
    130 rdf:type schema:CreativeWork
    131 https://doi.org/10.1109/cvpr.2006.68 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094512911
    132 rdf:type schema:CreativeWork
    133 https://doi.org/10.1109/cvprw.2006.192 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093480652
    134 rdf:type schema:CreativeWork
    135 https://doi.org/10.1109/iccv.2005.142 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094700637
    136 rdf:type schema:CreativeWork
    137 https://doi.org/10.1109/iccv.2005.152 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094301320
    138 rdf:type schema:CreativeWork
    139 https://doi.org/10.1109/iccv.2005.20 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094741769
    140 rdf:type schema:CreativeWork
    141 https://doi.org/10.1109/iccv.2005.77 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094132829
    142 rdf:type schema:CreativeWork
    143 https://www.grid.ac/institutes/grid.147455.6 schema:alternateName Carnegie Mellon University
    144 schema:name Carnegie Mellon University
    145 rdf:type schema:Organization
     



