Modelling semantic concepts in an embedding space as distributions


Ontology type: sgo:Patent     


Patent Info

DATE

N/A

AUTHORS

HAILIN JIN , ZHOU REN , ZHE LIN , CHEN FANG

ABSTRACT

Computer-implemented annotation of images using determined text labels 308, 310, 312 to describe image content, comprising: generating an embedded space 302 representing both images 314, 316, 318 and text labels 308, 310, 312; determining, using the embedded space, a text label describing a concept depicted in the image content; and annotating the image by associating it with the determined text label. Generating the embedded space comprises: computing distributions (depicted in the figure as star, + and X markers) to represent semantic clusters, i.e. groups of data with similar themes, in the embedded space, the semantic clusters being described by text labels 308, 310, 312 or depicted in image content 314, 316, 318; and mapping representative images to the distributions of the embedded space. The distributions may be Gaussian distributions representing semantic concepts. Determining text labels may include computing distances, within the embedding space, between embeddings of semantically similar regions of the image and the distributions of the semantic clusters. Annotating an image in this way essentially comprises isolating a depicted concept from the image, mapping it to the correct semantic cluster, searching the surrounding cluster for labels, and retrieving those labels to annotate the image, where cluster items that are close in distance are considered to have similar themes.
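
The core labeling step can be read as a nearest-distribution lookup. Below is a minimal sketch, not the patented implementation: it assumes each semantic cluster is modelled as a Gaussian with a known mean and covariance in the embedding space, and assigns an image-region embedding the label of the cluster under which it is most likely. All cluster names and values are illustrative.

# Minimal sketch of label assignment against Gaussian semantic clusters.
# The clusters and the query embedding below are made up for illustration.
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical clusters: text label -> (mean, covariance) in a 2-D embedding space.
clusters = {
    "dog":    (np.array([0.9, 0.1]), np.eye(2) * 0.05),
    "beach":  (np.array([0.2, 0.8]), np.eye(2) * 0.10),
    "sunset": (np.array([0.6, 0.7]), np.eye(2) * 0.08),
}

def label_for(embedding):
    """Return the label of the cluster whose Gaussian scores `embedding` highest."""
    scores = {
        label: multivariate_normal(mean=mu, cov=cov).logpdf(embedding)
        for label, (mu, cov) in clusters.items()
    }
    return max(scores, key=scores.get)

# A region embedding close to the "beach" cluster is annotated "beach".
print(label_for(np.array([0.25, 0.75])))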

Related SciGraph Publications

sg:pub.10.1007/s10994-010-5198-3 (https://doi.org/10.1007/s10994-010-5198-3)
https://doi.org/10.1109/cvpr.2016.251 (https://app.dimensions.ai/details/publication/pub.1095693789)

JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google's Structured Data Testing Tool.

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/2746", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "name": "HAILIN JIN", 
        "type": "Person"
      }, 
      {
        "name": "ZHOU REN", 
        "type": "Person"
      }, 
      {
        "name": "ZHE LIN", 
        "type": "Person"
      }, 
      {
        "name": "CHEN FANG", 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "sg:pub.10.1007/s10994-010-5198-3", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1001978802", 
          "https://doi.org/10.1007/s10994-010-5198-3"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2016.251", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1095693789"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "description": "

Computer implemented annotation of images using determined text labels 308, 310, 312 to describe image content, comprising: generating an embedded space 302 representing both images 314, 316, 318 and text labels 308, 310, 312; determining, using the embedded space, a text label describing a depicted concept in the image content; annotating the image by associating the determined text label. Generating the embedded space comprises: computing distributions (e.g. stars, +, X) to represent semantic clusters, e.g. groups of data of similar themes, in the embedded space, the semantic clusters being described by text labels 308, 310, 312 or depicted in image content 314, 316, 318; mapping representative images to the distributions of the embedded space. The distributions may use Gaussian distributions to represent semantic concepts. Determining text labels may include computing distances between embeddings of semantically similar regions of the image, and the distributions of semantic clusters, within the embedding space. Annotating images in this way essentially comprises isolating a depicted concept from an image, mapping it to the correct semantic cluster, searching the surrounding cluster for labels, retrieving the labels to annotate the image, where cluster items which are close in distance are considered to have similar themes.

", "id": "sg:patent.GB-2546368-A", "keywords": [ "modelling", "embedding", "distribution", "computer", "annotation", "label", "image content", "text", "concept", "Generating", "star", "semantics", "theme", "content", "representative", "determining", "distance", "similar region", "annotating", "mapping", "cluster" ], "name": "Modelling semantic concepts in an embedding space as distributions", "recipient": [ { "id": "https://www.grid.ac/institutes/grid.467212.4", "type": "Organization" } ], "sameAs": [ "https://app.dimensions.ai/details/patent/GB-2546368-A" ], "sdDataset": "patents", "sdDatePublished": "2019-03-07T15:31", "sdLicense": "https://scigraph.springernature.com/explorer/license/", "sdPublisher": { "name": "Springer Nature - SN SciGraph project", "type": "Organization" }, "sdSource": "s3://com.uberresearch.data.dev.patents-pipeline/full_run_10/sn-export/5eb3e5a348d7f117b22cc85fb0b02730/0000100128-0000348334/json_export_0db08f31.jsonl", "type": "Patent" } ]
 

Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML. See the license info for terms of use.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data that is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/patent.GB-2546368-A'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/patent.GB-2546368-A'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/patent.GB-2546368-A'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/patent.GB-2546368-A'
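
The same content negotiation works from any HTTP client. For example, a short Python sketch with the requests library, equivalent to the first curl call above:

import requests

URL = "https://scigraph.springernature.com/patent.GB-2546368-A"

# Ask for JSON-LD via the HTTP Accept header, as the curl examples do.
resp = requests.get(URL, headers={"Accept": "application/ld+json"})
resp.raise_for_status()
records = resp.json()  # a JSON array, as in the record shown earlier
print(records[0]["name"])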


 

This table displays all metadata directly associated with this object as RDF triples.

61 TRIPLES      14 PREDICATES      36 URIs      28 LITERALS      2 BLANK NODES

Subject Predicate Object
1 sg:patent.GB-2546368-A schema:about anzsrc-for:2746
2 schema:author Nd402497bd8b54726ada854f6772ff208
3 schema:citation sg:pub.10.1007/s10994-010-5198-3
4 https://doi.org/10.1109/cvpr.2016.251
5 schema:description (full patent abstract, identical to the description field in the JSON-LD record above)
6 schema:keywords Generating
7 annotating
8 annotation
9 cluster
10 computer
11 concept
12 content
13 determining
14 distance
15 distribution
16 embedding
17 image content
18 label
19 mapping
20 modelling
21 representative
22 semantics
23 similar region
24 star
25 text
26 theme
27 schema:name Modelling semantic concepts in an embedding space as distributions
28 schema:recipient https://www.grid.ac/institutes/grid.467212.4
29 schema:sameAs https://app.dimensions.ai/details/patent/GB-2546368-A
30 schema:sdDatePublished 2019-03-07T15:31
31 schema:sdLicense https://scigraph.springernature.com/explorer/license/
32 schema:sdPublisher N155ea7e01dd249368b89a39da739d43e
33 sgo:license sg:explorer/license/
34 sgo:sdDataset patents
35 rdf:type sgo:Patent
36 N0de7fd1f80cc4a84bbeae798873d8527 schema:name ZHOU REN
37 rdf:type schema:Person
38 N155ea7e01dd249368b89a39da739d43e schema:name Springer Nature - SN SciGraph project
39 rdf:type schema:Organization
40 N388d124f5e9a4f06bdd641a0dce65fcf rdf:first N0de7fd1f80cc4a84bbeae798873d8527
41 rdf:rest Nfe34c6aee6f54688a3a6760312ef3ecb
42 N86dcb575bba34f95a29035c33714678a schema:name ZHE LIN
43 rdf:type schema:Person
44 Nc80f944ce076426fa6fc620de17405b3 schema:name CHEN FANG
45 rdf:type schema:Person
46 Nd10815609a99425eb6e7747200b342f8 schema:name HAILIN JIN
47 rdf:type schema:Person
48 Nd402497bd8b54726ada854f6772ff208 rdf:first Nd10815609a99425eb6e7747200b342f8
49 rdf:rest N388d124f5e9a4f06bdd641a0dce65fcf
50 Nee06eaf2451a450e8925133f0e1c321d rdf:first Nc80f944ce076426fa6fc620de17405b3
51 rdf:rest rdf:nil
52 Nfe34c6aee6f54688a3a6760312ef3ecb rdf:first N86dcb575bba34f95a29035c33714678a
53 rdf:rest Nee06eaf2451a450e8925133f0e1c321d
54 anzsrc-for:2746 schema:inDefinedTermSet anzsrc-for:
55 rdf:type schema:DefinedTerm
56 sg:pub.10.1007/s10994-010-5198-3 schema:sameAs https://app.dimensions.ai/details/publication/pub.1001978802
57 https://doi.org/10.1007/s10994-010-5198-3
58 rdf:type schema:CreativeWork
59 https://doi.org/10.1109/cvpr.2016.251 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095693789
60 rdf:type schema:CreativeWork
61 https://www.grid.ac/institutes/grid.467212.4 rdf:type schema:Organization
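
Rows 40-53 encode the ordered author list as an RDF collection: a chain of blank nodes in which rdf:first holds an author node and rdf:rest points to the next cell, terminated by rdf:nil. A toy Python walk of that structure (blank-node IDs shortened for readability; the dict layout is illustrative, not an RDF API):

# Toy model of the rdf:first / rdf:rest chain from the triples above.
cells = {
    "Nd402": {"first": "HAILIN JIN", "rest": "N388d"},
    "N388d": {"first": "ZHOU REN",   "rest": "Nfe34"},
    "Nfe34": {"first": "ZHE LIN",    "rest": "Nee06"},
    "Nee06": {"first": "CHEN FANG",  "rest": "nil"},
}

def walk(head):
    """Follow rdf:rest links from `head` to rdf:nil, collecting rdf:first values."""
    authors = []
    while head != "nil":
        authors.append(cells[head]["first"])
        head = cells[head]["rest"]
    return authors

print(walk("Nd402"))  # ['HAILIN JIN', 'ZHOU REN', 'ZHE LIN', 'CHEN FANG']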
 



