Method and system for meshing human and computer competencies for object categorization


Ontology type: sgo:Patent     


Patent Info

DATE

2014-04-01T00:00

AUTHORS

Ashish Kapoor, Eric J. Horvitz, Desney S. Tan, Pradeep U. Shenoy

ABSTRACT

The subject disclosure relates to a method and system for visual object categorization. The method and system include receiving human inputs including data corresponding to passive human-brain responses to visualization of images. Computer inputs are also received which include data corresponding to outputs from a computerized vision-based processing of the images. The human and computer inputs are processed so as to yield a categorization for the images as a function of the human and computer inputs.

Related SciGraph Publications

  • 2004-11. Distinctive Image Features from Scale-Invariant Keypoints in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2005-01. The Combination of Text Classifiers Using Reliability Indicators in INFORMATION RETRIEVAL JOURNAL
  • 2007-06. Local Features and Kernels for Classification of Texture and Object Categories: A Comprehensive Study in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 1996-08. Bagging predictors in MACHINE LEARNING

JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool (SDTT).

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/2746", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "name": "Ashish Kapoor", 
            "type": "Person"
          }, 
          {
            "name": "Eric J. Horvitz", 
            "type": "Person"
          }, 
          {
            "name": "Desney S. Tan", 
            "type": "Person"
          }, 
          {
            "name": "Pradeep U. Shenoy", 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/bf00058655", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1002929950", 
              "https://doi.org/10.1007/bf00058655"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.cviu.2005.09.012", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1004784969"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/0013-4694(88)90149-6", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1005445238"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-006-9794-4", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1008205152", 
              "https://doi.org/10.1007/s11263-006-9794-4"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.tics.2006.10.012", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1011093935"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1088/1741-2560/1/2/001", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1014355603"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.patrec.2008.01.030", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1017355298"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.cogbrainres.2003.11.010", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1029041858"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1093/cercor/bhg111", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1032469247"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1145/581571.581573", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1034982779"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.cviu.2004.02.004", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1040501891"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1023/b:inrt.0000048491.59134.94", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1043889433", 
              "https://doi.org/10.1023/b:inrt.0000048491.59134.94"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1023/b:visi.0000029664.99615.94", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1052687286", 
              "https://doi.org/10.1023/b:visi.0000029664.99615.94"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.cub.2007.08.048", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1053173106"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1162/neco.1992.4.4.590", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1053589321"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/34.667881", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061156743"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/78.790663", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061230762"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/tnsre.2006.875550", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061740167"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1613/jair.2005", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1105579395"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2014-04-01T00:00", 
        "description": "

    The subject disclosure relates to a method and system for visual object categorization. The method and system include receiving human inputs including data corresponding to passive human-brain responses to visualization of images. Computer inputs are also received which include data corresponding to outputs from a computerized vision-based processing of the images. The human and computer inputs are processing so as to yield a categorization for the images as a function of the human and computer inputs.

    ", "id": "sg:patent.US-8688208-B2", "keywords": [ "method", "meshing", "competency", "categorization", "disclosure", "visual object", "human input", "brain response", "visualization", "computer", "output", "processing" ], "name": "Method and system for meshing human and computer competencies for object categorization", "recipient": [ { "id": "https://www.grid.ac/institutes/grid.419815.0", "type": "Organization" } ], "sameAs": [ "https://app.dimensions.ai/details/patent/US-8688208-B2" ], "sdDataset": "patents", "sdDatePublished": "2019-04-18T10:12", "sdLicense": "https://scigraph.springernature.com/explorer/license/", "sdPublisher": { "name": "Springer Nature - SN SciGraph project", "type": "Organization" }, "sdSource": "s3://com-uberresearch-data-patents-target-20190320-rc/data/sn-export/402f166718b70575fb5d4ffe01f064d1/0000100128-0000352499/json_export_00754.jsonl", "type": "Patent" } ]
     

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/patent.US-8688208-B2'
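
    A minimal sketch (not part of the SciGraph page) of the same request from Python with the requests library; it assumes the response has the shape shown above, a top-level JSON array holding a single patent record.

    import requests

    # Same content negotiation as the curl call above, done from Python.
    url = "https://scigraph.springernature.com/patent.US-8688208-B2"
    resp = requests.get(url, headers={"Accept": "application/ld+json"})
    resp.raise_for_status()

    # The JSON-LD shown above is an array containing one patent record.
    record = resp.json()[0]
    print(record["name"])                                    # patent title
    print(record["datePublished"])                           # 2014-04-01T00:00
    print([author["name"] for author in record["author"]])   # inventors
    print(len(record["citation"]), "citation entries")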

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/patent.US-8688208-B2'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/patent.US-8688208-B2'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/patent.US-8688208-B2'
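
    For the RDF serializations, the record can also be loaded into a graph. The sketch below is an illustration, not part of the SciGraph documentation; it assumes the rdflib package is installed and that the schema: prefix in the triple table below expands to http://schema.org/.

    from rdflib import Graph, Namespace

    SCHEMA = Namespace("http://schema.org/")

    # The format hint tells rdflib how to parse the response; "turtle" here
    # mirrors the text/turtle curl example ("nt" and "xml" also work, and
    # "json-ld" with rdflib 6+ or the rdflib-jsonld plugin).
    g = Graph()
    g.parse("https://scigraph.springernature.com/patent.US-8688208-B2",
            format="turtle")

    # Print every work the patent cites (schema:citation objects).
    for _, cited in g.subject_objects(SCHEMA.citation):
        print(cited)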


     

    This table displays all metadata directly associated with this object as RDF triples; a short sketch for recomputing the summary counts follows the table.

    107 TRIPLES      15 PREDICATES      45 URIs      20 LITERALS      2 BLANK NODES

    Subject Predicate Object
    1 sg:patent.US-8688208-B2 schema:about anzsrc-for:2746
    2 schema:author Nb273151d58984a2aa74ed99e48d5bf1e
    3 schema:citation sg:pub.10.1007/bf00058655
    4 sg:pub.10.1007/s11263-006-9794-4
    5 sg:pub.10.1023/b:inrt.0000048491.59134.94
    6 sg:pub.10.1023/b:visi.0000029664.99615.94
    7 https://doi.org/10.1016/0013-4694(88)90149-6
    8 https://doi.org/10.1016/j.cogbrainres.2003.11.010
    9 https://doi.org/10.1016/j.cub.2007.08.048
    10 https://doi.org/10.1016/j.cviu.2004.02.004
    11 https://doi.org/10.1016/j.cviu.2005.09.012
    12 https://doi.org/10.1016/j.patrec.2008.01.030
    13 https://doi.org/10.1016/j.tics.2006.10.012
    14 https://doi.org/10.1088/1741-2560/1/2/001
    15 https://doi.org/10.1093/cercor/bhg111
    16 https://doi.org/10.1109/34.667881
    17 https://doi.org/10.1109/78.790663
    18 https://doi.org/10.1109/tnsre.2006.875550
    19 https://doi.org/10.1145/581571.581573
    20 https://doi.org/10.1162/neco.1992.4.4.590
    21 https://doi.org/10.1613/jair.2005
    22 schema:datePublished 2014-04-01T00:00
    23 schema:description The subject disclosure relates to a method and system for visual object categorization. The method and system include receiving human inputs including data corresponding to passive human-brain responses to visualization of images. Computer inputs are also received which include data corresponding to outputs from a computerized vision-based processing of the images. The human and computer inputs are processed so as to yield a categorization for the images as a function of the human and computer inputs.
    24 schema:keywords brain response
    25 categorization
    26 competency
    27 computer
    28 disclosure
    29 human input
    30 meshing
    31 method
    32 output
    33 processing
    34 visual object
    35 visualization
    36 schema:name Method and system for meshing human and computer competencies for object categorization
    37 schema:recipient https://www.grid.ac/institutes/grid.419815.0
    38 schema:sameAs https://app.dimensions.ai/details/patent/US-8688208-B2
    39 schema:sdDatePublished 2019-04-18T10:12
    40 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    41 schema:sdPublisher N25d2f6226d5d4d32bb5c1a6b999d6880
    42 sgo:license sg:explorer/license/
    43 sgo:sdDataset patents
    44 rdf:type sgo:Patent
    45 N2224a1fcc6814580a8e738d2661a5101 schema:name Desney S. Tan
    46 rdf:type schema:Person
    47 N23047253b94d45988a47f085c9105f85 rdf:first Na2263daf9ef0444984770e948cbc580e
    48 rdf:rest rdf:nil
    49 N25d2f6226d5d4d32bb5c1a6b999d6880 schema:name Springer Nature - SN SciGraph project
    50 rdf:type schema:Organization
    51 N609bd2e63622403a94cf005076b31164 rdf:first N6b09877e826e47fcbeba2fb691147c2d
    52 rdf:rest Nf070121d3f6a435285a9355f3b131b88
    53 N6b09877e826e47fcbeba2fb691147c2d schema:name Eric J. Horvitz
    54 rdf:type schema:Person
    55 N82363a44bca64c44a5dedd7a330959d4 schema:name Ashish Kapoor
    56 rdf:type schema:Person
    57 Na2263daf9ef0444984770e948cbc580e schema:name Pradeep U. Shenoy
    58 rdf:type schema:Person
    59 Nb273151d58984a2aa74ed99e48d5bf1e rdf:first N82363a44bca64c44a5dedd7a330959d4
    60 rdf:rest N609bd2e63622403a94cf005076b31164
    61 Nf070121d3f6a435285a9355f3b131b88 rdf:first N2224a1fcc6814580a8e738d2661a5101
    62 rdf:rest N23047253b94d45988a47f085c9105f85
    63 anzsrc-for:2746 schema:inDefinedTermSet anzsrc-for:
    64 rdf:type schema:DefinedTerm
    65 sg:pub.10.1007/bf00058655 schema:sameAs https://app.dimensions.ai/details/publication/pub.1002929950
    66 https://doi.org/10.1007/bf00058655
    67 rdf:type schema:CreativeWork
    68 sg:pub.10.1007/s11263-006-9794-4 schema:sameAs https://app.dimensions.ai/details/publication/pub.1008205152
    69 https://doi.org/10.1007/s11263-006-9794-4
    70 rdf:type schema:CreativeWork
    71 sg:pub.10.1023/b:inrt.0000048491.59134.94 schema:sameAs https://app.dimensions.ai/details/publication/pub.1043889433
    72 https://doi.org/10.1023/b:inrt.0000048491.59134.94
    73 rdf:type schema:CreativeWork
    74 sg:pub.10.1023/b:visi.0000029664.99615.94 schema:sameAs https://app.dimensions.ai/details/publication/pub.1052687286
    75 https://doi.org/10.1023/b:visi.0000029664.99615.94
    76 rdf:type schema:CreativeWork
    77 https://doi.org/10.1016/0013-4694(88)90149-6 schema:sameAs https://app.dimensions.ai/details/publication/pub.1005445238
    78 rdf:type schema:CreativeWork
    79 https://doi.org/10.1016/j.cogbrainres.2003.11.010 schema:sameAs https://app.dimensions.ai/details/publication/pub.1029041858
    80 rdf:type schema:CreativeWork
    81 https://doi.org/10.1016/j.cub.2007.08.048 schema:sameAs https://app.dimensions.ai/details/publication/pub.1053173106
    82 rdf:type schema:CreativeWork
    83 https://doi.org/10.1016/j.cviu.2004.02.004 schema:sameAs https://app.dimensions.ai/details/publication/pub.1040501891
    84 rdf:type schema:CreativeWork
    85 https://doi.org/10.1016/j.cviu.2005.09.012 schema:sameAs https://app.dimensions.ai/details/publication/pub.1004784969
    86 rdf:type schema:CreativeWork
    87 https://doi.org/10.1016/j.patrec.2008.01.030 schema:sameAs https://app.dimensions.ai/details/publication/pub.1017355298
    88 rdf:type schema:CreativeWork
    89 https://doi.org/10.1016/j.tics.2006.10.012 schema:sameAs https://app.dimensions.ai/details/publication/pub.1011093935
    90 rdf:type schema:CreativeWork
    91 https://doi.org/10.1088/1741-2560/1/2/001 schema:sameAs https://app.dimensions.ai/details/publication/pub.1014355603
    92 rdf:type schema:CreativeWork
    93 https://doi.org/10.1093/cercor/bhg111 schema:sameAs https://app.dimensions.ai/details/publication/pub.1032469247
    94 rdf:type schema:CreativeWork
    95 https://doi.org/10.1109/34.667881 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061156743
    96 rdf:type schema:CreativeWork
    97 https://doi.org/10.1109/78.790663 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061230762
    98 rdf:type schema:CreativeWork
    99 https://doi.org/10.1109/tnsre.2006.875550 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061740167
    100 rdf:type schema:CreativeWork
    101 https://doi.org/10.1145/581571.581573 schema:sameAs https://app.dimensions.ai/details/publication/pub.1034982779
    102 rdf:type schema:CreativeWork
    103 https://doi.org/10.1162/neco.1992.4.4.590 schema:sameAs https://app.dimensions.ai/details/publication/pub.1053589321
    104 rdf:type schema:CreativeWork
    105 https://doi.org/10.1613/jair.2005 schema:sameAs https://app.dimensions.ai/details/publication/pub.1105579395
    106 rdf:type schema:CreativeWork
    107 https://www.grid.ac/institutes/grid.419815.0 rdf:type schema:Organization
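
    As a rough sketch (again assuming rdflib; exact totals depend on how the page tallies nodes versus predicates), summary figures like the ones above the table can be recomputed from the loaded graph:

    from rdflib import Graph, URIRef, Literal, BNode

    g = Graph()
    g.parse("https://scigraph.springernature.com/patent.US-8688208-B2",
            format="turtle")

    # Collect distinct predicates and distinct subject/object nodes.
    predicates, nodes = set(), set()
    for s, p, o in g:
        predicates.add(p)
        nodes.update((s, o))

    print(len(g), "triples")
    print(len(predicates), "predicates")
    print(sum(isinstance(n, URIRef) for n in nodes), "URIs")
    print(sum(isinstance(n, Literal) for n in nodes), "literals")
    print(sum(isinstance(n, BNode) for n in nodes), "blank nodes")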
     



