TextonBoost: Joint Appearance, Shape and Context Modeling for Multi-class Object Recognition and Segmentation


Ontology type: schema:Chapter     


Chapter Info

DATE

2006

AUTHORS

Jamie Shotton , John Winn , Carsten Rother , Antonio Criminisi

ABSTRACT

This paper proposes a new approach to learning a discriminative model of object classes, incorporating appearance, shape and context information efficiently. The learned model is used for automatic visual recognition and semantic segmentation of photographs. Our discriminative model exploits novel features, based on textons, which jointly model shape and texture. Unary classification and feature selection is achieved using shared boosting to give an efficient classifier which can be applied to a large number of classes. Accurate image segmentation is achieved by incorporating these classifiers in a conditional random field. Efficient training of the model on very large datasets is achieved by exploiting both random feature selection and piecewise training methods. High classification and segmentation accuracy are demonstrated on three different databases: i) our own 21-object class database of photographs of real objects viewed under general lighting conditions, poses and viewpoints, ii) the 7-class Corel subset and iii) the 7-class Sowerby database used in [1]. The proposed algorithm gives competitive results both for highly textured (e.g. grass, trees), highly structured (e.g. cars, faces, bikes, aeroplanes) and articulated objects (e.g. body, cow).

PAGES

1-15

Book

TITLE

Computer Vision – ECCV 2006

ISBN

978-3-540-33832-1
978-3-540-33833-8

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/11744023_1

DOI

http://dx.doi.org/10.1007/11744023_1

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1017544873



JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record in an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool.

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "University of Cambridge", 
          "id": "https://www.grid.ac/institutes/grid.5335.0", 
          "name": [
            "Department of Engineering, University of Cambridge, UK"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Shotton", 
        "givenName": "Jamie", 
        "id": "sg:person.013445405632.17", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013445405632.17"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Microsoft Research (United Kingdom)", 
          "id": "https://www.grid.ac/institutes/grid.24488.32", 
          "name": [
            "Microsoft Research Ltd., Cambridge, UK"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Winn", 
        "givenName": "John", 
        "id": "sg:person.01221574626.63", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01221574626.63"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Microsoft Research (United Kingdom)", 
          "id": "https://www.grid.ac/institutes/grid.24488.32", 
          "name": [
            "Microsoft Research Ltd., Cambridge, UK"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Rother", 
        "givenName": "Carsten", 
        "id": "sg:person.0621771321.07", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0621771321.07"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Microsoft Research (United Kingdom)", 
          "id": "https://www.grid.ac/institutes/grid.24488.32", 
          "name": [
            "Microsoft Research Ltd., Cambridge, UK"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Criminisi", 
        "givenName": "Antonio", 
        "id": "sg:person.0674563210.87", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0674563210.87"
        ], 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "https://doi.org/10.1214/aos/1016218223", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1020629296"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/3-540-47979-1_7", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1040055518", 
          "https://doi.org/10.1007/3-540-47979-1_7"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/s11263-005-4635-4", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1044718379", 
          "https://doi.org/10.1007/s11263-005-4635-4"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1023/a:1011126920638", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1046312359", 
          "https://doi.org/10.1023/a:1011126920638"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/34.993558", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061157405"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2001.990517", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1093187020"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2004.314", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1093364701"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2005.249", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1093441038"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2003.1211479", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1093624919"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2004.1315232", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1094440093"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2005.320", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1094611604"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/iccv.2005.171", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1094707806"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2004.1315241", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1094870330"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/iccv.2005.148", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1095125400"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/iccv.2001.937505", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1095383001"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/iccv.2005.9", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1095480350"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2000.855809", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1095602151"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.5244/c.17.78", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1099383029"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2006", 
    "datePublishedReg": "2006-01-01", 
    "description": "This paper proposes a new approach to learning a discriminative model of object classes, incorporating appearance, shape and context information efficiently. The learned model is used for automatic visual recognition and semantic segmentation of photographs. Our discriminative model exploits novel features, based on textons, which jointly model shape and texture. Unary classification and feature selection is achieved using shared boosting to give an efficient classifier which can be applied to a large number of classes. Accurate image segmentation is achieved by incorporating these classifiers in a conditional random field. Efficient training of the model on very large datasets is achieved by exploiting both random feature selection and piecewise training methods. High classification and segmentation accuracy are demonstrated on three different databases: i) our own 21-object class database of photographs of real objects viewed under general lighting conditions, poses and viewpoints, ii) the 7-class Corel subset and iii) the 7-class Sowerby database used in [1]. The proposed algorithm gives competitive results both for highly textured (e.g. grass, trees), highly structured (e.g. cars, faces, bikes, aeroplanes) and articulated objects (e.g. body, cow).", 
    "editor": [
      {
        "familyName": "Leonardis", 
        "givenName": "Ale\u0161", 
        "type": "Person"
      }, 
      {
        "familyName": "Bischof", 
        "givenName": "Horst", 
        "type": "Person"
      }, 
      {
        "familyName": "Pinz", 
        "givenName": "Axel", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/11744023_1", 
    "inLanguage": [
      "en"
    ], 
    "isAccessibleForFree": false, 
    "isPartOf": {
      "isbn": [
        "978-3-540-33832-1", 
        "978-3-540-33833-8"
      ], 
      "name": "Computer Vision \u2013 ECCV 2006", 
      "type": "Book"
    }, 
    "name": "TextonBoost: Joint Appearance, Shape and Context Modeling for Multi-class Object Recognition and Segmentation", 
    "pagination": "1-15", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1017544873"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/11744023_1"
        ]
      }, 
      {
        "name": "readcube_id", 
        "type": "PropertyValue", 
        "value": [
          "53971ce5ca9ba85a8a74ab8ab32ddf659cc2c3163f825c7dfc75e5168a0a7585"
        ]
      }
    ], 
    "publisher": {
      "location": "Berlin, Heidelberg", 
      "name": "Springer Berlin Heidelberg", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/11744023_1", 
      "https://app.dimensions.ai/details/publication/pub.1017544873"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2019-04-16T07:30", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000356_0000000356/records_57883_00000000.jsonl", 
    "type": "Chapter", 
    "url": "https://link.springer.com/10.1007%2F11744023_1"
  }
]
 


HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/11744023_1'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/11744023_1'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/11744023_1'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/11744023_1'
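
Once fetched, the JSON-LD response is plain JSON and can be queried directly in code. The sketch below is an illustrative example, not an official SciGraph client: the `record_jsonld` literal is a trimmed, hand-copied subset of the record shown above (in practice you would pipe in the body of the `application/ld+json` curl call), and `author_names` is a hypothetical helper name.

```python
import json

# Trimmed copy of the JSON-LD record shown above (illustrative subset only).
# A real workflow would read this from the curl response instead.
record_jsonld = """
[
  {
    "id": "sg:pub.10.1007/11744023_1",
    "name": "TextonBoost: Joint Appearance, Shape and Context Modeling for Multi-class Object Recognition and Segmentation",
    "datePublished": "2006",
    "author": [
      {"familyName": "Shotton", "givenName": "Jamie", "type": "Person"},
      {"familyName": "Winn", "givenName": "John", "type": "Person"},
      {"familyName": "Rother", "givenName": "Carsten", "type": "Person"},
      {"familyName": "Criminisi", "givenName": "Antonio", "type": "Person"}
    ],
    "isPartOf": {"name": "Computer Vision \\u2013 ECCV 2006", "type": "Book"}
  }
]
"""

def author_names(jsonld_text):
    """Return 'Given Family' strings for each author in a SciGraph chapter record.

    SciGraph wraps the record in a one-element JSON array, so we take index 0.
    """
    chapter = json.loads(jsonld_text)[0]
    return ["{} {}".format(a["givenName"], a["familyName"])
            for a in chapter.get("author", [])]

print(author_names(record_jsonld))
# prints ['Jamie Shotton', 'John Winn', 'Carsten Rother', 'Antonio Criminisi']
```

The same pattern extends to any other field in the record (`datePublished`, `isPartOf`, `citation`, and so on), since the JSON-LD keys mirror the schema.org property names listed in the triples table below.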


 

This table displays all metadata directly associated with this object as RDF triples.

156 TRIPLES      23 PREDICATES      45 URIs      20 LITERALS      8 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/11744023_1 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author N210b357fff6f49b1b2065dd505afac6e
4 schema:citation sg:pub.10.1007/3-540-47979-1_7
5 sg:pub.10.1007/s11263-005-4635-4
6 sg:pub.10.1023/a:1011126920638
7 https://doi.org/10.1109/34.993558
8 https://doi.org/10.1109/cvpr.2000.855809
9 https://doi.org/10.1109/cvpr.2001.990517
10 https://doi.org/10.1109/cvpr.2003.1211479
11 https://doi.org/10.1109/cvpr.2004.1315232
12 https://doi.org/10.1109/cvpr.2004.1315241
13 https://doi.org/10.1109/cvpr.2004.314
14 https://doi.org/10.1109/cvpr.2005.249
15 https://doi.org/10.1109/cvpr.2005.320
16 https://doi.org/10.1109/iccv.2001.937505
17 https://doi.org/10.1109/iccv.2005.148
18 https://doi.org/10.1109/iccv.2005.171
19 https://doi.org/10.1109/iccv.2005.9
20 https://doi.org/10.1214/aos/1016218223
21 https://doi.org/10.5244/c.17.78
22 schema:datePublished 2006
23 schema:datePublishedReg 2006-01-01
24 schema:description This paper proposes a new approach to learning a discriminative model of object classes, incorporating appearance, shape and context information efficiently. The learned model is used for automatic visual recognition and semantic segmentation of photographs. Our discriminative model exploits novel features, based on textons, which jointly model shape and texture. Unary classification and feature selection is achieved using shared boosting to give an efficient classifier which can be applied to a large number of classes. Accurate image segmentation is achieved by incorporating these classifiers in a conditional random field. Efficient training of the model on very large datasets is achieved by exploiting both random feature selection and piecewise training methods. High classification and segmentation accuracy are demonstrated on three different databases: i) our own 21-object class database of photographs of real objects viewed under general lighting conditions, poses and viewpoints, ii) the 7-class Corel subset and iii) the 7-class Sowerby database used in [1]. The proposed algorithm gives competitive results both for highly textured (e.g. grass, trees), highly structured (e.g. cars, faces, bikes, aeroplanes) and articulated objects (e.g. body, cow).
25 schema:editor N7eaf9a4bd65a482face163f4948c9635
26 schema:genre chapter
27 schema:inLanguage en
28 schema:isAccessibleForFree false
29 schema:isPartOf Nadb9bcc304ff49d2b316ba8396d11fa6
30 schema:name TextonBoost: Joint Appearance, Shape and Context Modeling for Multi-class Object Recognition and Segmentation
31 schema:pagination 1-15
32 schema:productId N4336732d0df34a2aa442b52adef03bbc
33 Nb5ac0105a08f45c796e35c7bb8e249f0
34 Nc7123fe41eb947bdac41ada6abd6f594
35 schema:publisher N708697e2e79d427f975e239e42e2f76f
36 schema:sameAs https://app.dimensions.ai/details/publication/pub.1017544873
37 https://doi.org/10.1007/11744023_1
38 schema:sdDatePublished 2019-04-16T07:30
39 schema:sdLicense https://scigraph.springernature.com/explorer/license/
40 schema:sdPublisher N09204a91a7824c17b9ac5ef484b735de
41 schema:url https://link.springer.com/10.1007%2F11744023_1
42 sgo:license sg:explorer/license/
43 sgo:sdDataset chapters
44 rdf:type schema:Chapter
45 N06fb1869cc904b3eac0648137df68552 rdf:first Na33caa98a5bd4be7835b32d6b5da6710
46 rdf:rest rdf:nil
47 N09204a91a7824c17b9ac5ef484b735de schema:name Springer Nature - SN SciGraph project
48 rdf:type schema:Organization
49 N0bad27588e1b45e3a95369b6ec14aaa4 rdf:first sg:person.01221574626.63
50 rdf:rest N869712db4b3b4fea86ae6eeea0812374
51 N210b357fff6f49b1b2065dd505afac6e rdf:first sg:person.013445405632.17
52 rdf:rest N0bad27588e1b45e3a95369b6ec14aaa4
53 N22d3e496c0e141da906db26cd83db6a3 schema:familyName Leonardis
54 schema:givenName Aleš
55 rdf:type schema:Person
56 N4336732d0df34a2aa442b52adef03bbc schema:name readcube_id
57 schema:value 53971ce5ca9ba85a8a74ab8ab32ddf659cc2c3163f825c7dfc75e5168a0a7585
58 rdf:type schema:PropertyValue
59 N6b9b8327d80347b38ddba2f96ce59bc0 rdf:first sg:person.0674563210.87
60 rdf:rest rdf:nil
61 N708697e2e79d427f975e239e42e2f76f schema:location Berlin, Heidelberg
62 schema:name Springer Berlin Heidelberg
63 rdf:type schema:Organisation
64 N711230d91e204ef4af562dd88bf9b4ee schema:familyName Bischof
65 schema:givenName Horst
66 rdf:type schema:Person
67 N7eaf9a4bd65a482face163f4948c9635 rdf:first N22d3e496c0e141da906db26cd83db6a3
68 rdf:rest Na7f31c6c1f6342349b33de34264b04d5
69 N869712db4b3b4fea86ae6eeea0812374 rdf:first sg:person.0621771321.07
70 rdf:rest N6b9b8327d80347b38ddba2f96ce59bc0
71 Na33caa98a5bd4be7835b32d6b5da6710 schema:familyName Pinz
72 schema:givenName Axel
73 rdf:type schema:Person
74 Na7f31c6c1f6342349b33de34264b04d5 rdf:first N711230d91e204ef4af562dd88bf9b4ee
75 rdf:rest N06fb1869cc904b3eac0648137df68552
76 Nadb9bcc304ff49d2b316ba8396d11fa6 schema:isbn 978-3-540-33832-1
77 978-3-540-33833-8
78 schema:name Computer Vision – ECCV 2006
79 rdf:type schema:Book
80 Nb5ac0105a08f45c796e35c7bb8e249f0 schema:name dimensions_id
81 schema:value pub.1017544873
82 rdf:type schema:PropertyValue
83 Nc7123fe41eb947bdac41ada6abd6f594 schema:name doi
84 schema:value 10.1007/11744023_1
85 rdf:type schema:PropertyValue
86 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
87 schema:name Information and Computing Sciences
88 rdf:type schema:DefinedTerm
89 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
90 schema:name Artificial Intelligence and Image Processing
91 rdf:type schema:DefinedTerm
92 sg:person.01221574626.63 schema:affiliation https://www.grid.ac/institutes/grid.24488.32
93 schema:familyName Winn
94 schema:givenName John
95 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01221574626.63
96 rdf:type schema:Person
97 sg:person.013445405632.17 schema:affiliation https://www.grid.ac/institutes/grid.5335.0
98 schema:familyName Shotton
99 schema:givenName Jamie
100 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013445405632.17
101 rdf:type schema:Person
102 sg:person.0621771321.07 schema:affiliation https://www.grid.ac/institutes/grid.24488.32
103 schema:familyName Rother
104 schema:givenName Carsten
105 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0621771321.07
106 rdf:type schema:Person
107 sg:person.0674563210.87 schema:affiliation https://www.grid.ac/institutes/grid.24488.32
108 schema:familyName Criminisi
109 schema:givenName Antonio
110 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0674563210.87
111 rdf:type schema:Person
112 sg:pub.10.1007/3-540-47979-1_7 schema:sameAs https://app.dimensions.ai/details/publication/pub.1040055518
113 https://doi.org/10.1007/3-540-47979-1_7
114 rdf:type schema:CreativeWork
115 sg:pub.10.1007/s11263-005-4635-4 schema:sameAs https://app.dimensions.ai/details/publication/pub.1044718379
116 https://doi.org/10.1007/s11263-005-4635-4
117 rdf:type schema:CreativeWork
118 sg:pub.10.1023/a:1011126920638 schema:sameAs https://app.dimensions.ai/details/publication/pub.1046312359
119 https://doi.org/10.1023/a:1011126920638
120 rdf:type schema:CreativeWork
121 https://doi.org/10.1109/34.993558 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061157405
122 rdf:type schema:CreativeWork
123 https://doi.org/10.1109/cvpr.2000.855809 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095602151
124 rdf:type schema:CreativeWork
125 https://doi.org/10.1109/cvpr.2001.990517 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093187020
126 rdf:type schema:CreativeWork
127 https://doi.org/10.1109/cvpr.2003.1211479 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093624919
128 rdf:type schema:CreativeWork
129 https://doi.org/10.1109/cvpr.2004.1315232 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094440093
130 rdf:type schema:CreativeWork
131 https://doi.org/10.1109/cvpr.2004.1315241 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094870330
132 rdf:type schema:CreativeWork
133 https://doi.org/10.1109/cvpr.2004.314 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093364701
134 rdf:type schema:CreativeWork
135 https://doi.org/10.1109/cvpr.2005.249 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093441038
136 rdf:type schema:CreativeWork
137 https://doi.org/10.1109/cvpr.2005.320 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094611604
138 rdf:type schema:CreativeWork
139 https://doi.org/10.1109/iccv.2001.937505 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095383001
140 rdf:type schema:CreativeWork
141 https://doi.org/10.1109/iccv.2005.148 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095125400
142 rdf:type schema:CreativeWork
143 https://doi.org/10.1109/iccv.2005.171 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094707806
144 rdf:type schema:CreativeWork
145 https://doi.org/10.1109/iccv.2005.9 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095480350
146 rdf:type schema:CreativeWork
147 https://doi.org/10.1214/aos/1016218223 schema:sameAs https://app.dimensions.ai/details/publication/pub.1020629296
148 rdf:type schema:CreativeWork
149 https://doi.org/10.5244/c.17.78 schema:sameAs https://app.dimensions.ai/details/publication/pub.1099383029
150 rdf:type schema:CreativeWork
151 https://www.grid.ac/institutes/grid.24488.32 schema:alternateName Microsoft Research (United Kingdom)
152 schema:name Microsoft Research Ltd., Cambridge, UK
153 rdf:type schema:Organization
154 https://www.grid.ac/institutes/grid.5335.0 schema:alternateName University of Cambridge
155 schema:name Department of Engineering, University of Cambridge, UK
156 rdf:type schema:Organization
 



