The Eigen-Transform and Applications


Ontology type: schema:Chapter     


Chapter Info

DATE

2006

AUTHORS

Alireza Tavakoli Targhi, Eric Hayman, Jan-Olof Eklundh, Mehrdad Shahshahani

ABSTRACT

This paper introduces a novel texture descriptor, the Eigen-transform. The transform provides a measure of roughness by considering the eigenvalues of a matrix which is formed very simply by inserting the greyvalues of a square patch around a pixel directly into a matrix of the same size. The eigenvalue of largest magnitude turns out to give a smoothed version of the original image, but the eigenvalues of smaller magnitude encode high frequency information characteristic of natural textures. A major advantage of the Eigen-transform is that it does not fire on straight, or locally straight, brightness edges, instead it reacts almost entirely to the texture itself. This is in contrast to many other descriptors such as Gabor filters or the standard deviation of greyvalues of the patch. These properties make it remarkably well suited to practical applications. Our experiments focus on two main areas. The first is in bottom-up visual attention where textured objects pop out from the background using the Eigen-transform. The second is unsupervised texture segmentation with particular emphasis on real-world, cluttered indoor environments. We compare results with other state-of-the-art methods and find that the Eigen-transform is highly competitive, despite its simplicity and low dimensionality.
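The abstract describes the transform operationally: take the square patch of greyvalues around each pixel, treat it directly as a matrix of the same size, and use the magnitudes of the smaller eigenvalues as a roughness measure. The following is a minimal illustrative sketch of that idea in Python; the window size w, the cut-off l, and the choice to average the remaining eigenvalue magnitudes are assumptions for illustration, not parameters taken from the paper.

import numpy as np

def eigen_transform(image, w=15, l=5):
    # Illustrative sketch of the Eigen-transform as described in the abstract.
    # For each pixel, the w x w patch of greyvalues is inserted directly into
    # a w x w matrix and its eigenvalues are computed; the smaller-magnitude
    # eigenvalues are then averaged as a roughness measure.  The values of w
    # and l and the averaging step are assumptions, not taken from the paper.
    h, width = image.shape
    r = w // 2
    padded = np.pad(image.astype(float), r, mode="reflect")
    out = np.zeros((h, width))
    for y in range(h):
        for x in range(width):
            patch = padded[y:y + w, x:x + w]      # w x w greyvalue matrix
            eig = np.linalg.eigvals(patch)        # complex in general
            mags = np.sort(np.abs(eig))[::-1]     # magnitudes, descending
            out[y, x] = mags[l:].mean()           # discard the l largest
    return out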

PAGES

70-79

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/11612032_8

DOI

http://dx.doi.org/10.1007/11612032_8

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1009726117



JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool (SDTT).

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Computational Vision and Active Perception Laboratory, School of Computer Science and Communication, Royal Institute of Technology (KTH), SE-100 44, Stockholm, Sweden", 
          "id": "http://www.grid.ac/institutes/grid.5037.1", 
          "name": [
            "Computational Vision and Active Perception Laboratory, School of Computer Science and Communication, Royal Institute of Technology (KTH), SE-100 44, Stockholm, Sweden"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Targhi", 
        "givenName": "Alireza Tavakoli", 
        "id": "sg:person.011760054571.70", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011760054571.70"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Computational Vision and Active Perception Laboratory, School of Computer Science and Communication, Royal Institute of Technology (KTH), SE-100 44, Stockholm, Sweden", 
          "id": "http://www.grid.ac/institutes/grid.5037.1", 
          "name": [
            "Computational Vision and Active Perception Laboratory, School of Computer Science and Communication, Royal Institute of Technology (KTH), SE-100 44, Stockholm, Sweden"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Hayman", 
        "givenName": "Eric", 
        "id": "sg:person.010203264647.00", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010203264647.00"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Computational Vision and Active Perception Laboratory, School of Computer Science and Communication, Royal Institute of Technology (KTH), SE-100 44, Stockholm, Sweden", 
          "id": "http://www.grid.ac/institutes/grid.5037.1", 
          "name": [
            "Computational Vision and Active Perception Laboratory, School of Computer Science and Communication, Royal Institute of Technology (KTH), SE-100 44, Stockholm, Sweden"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Eklundh", 
        "givenName": "Jan-Olof", 
        "id": "sg:person.014400652155.17", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014400652155.17"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Institute for Studies in Theoretical Physics and Mathematics (IPM), Tehran, Iran", 
          "id": "http://www.grid.ac/institutes/grid.418744.a", 
          "name": [
            "Institute for Studies in Theoretical Physics and Mathematics (IPM), Tehran, Iran"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Shahshahani", 
        "givenName": "Mehrdad", 
        "id": "sg:person.010365113571.00", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010365113571.00"
        ], 
        "type": "Person"
      }
    ], 
    "datePublished": "2006", 
    "datePublishedReg": "2006-01-01", 
    "description": "This paper introduces a novel texture descriptor, the Eigen-transform. The transform provides a measure of roughness by considering the eigenvalues of a matrix which is formed very simply by inserting the greyvalues of a square patch around a pixel directly into a matrix of the same size. The eigenvalue of largest magnitude turns out to give a smoothed version of the original image, but the eigenvalues of smaller magnitude encode high frequency information characteristic of natural textures. A major advantage of the Eigen-transform is that it does not fire on straight, or locally straight, brightness edges, instead it reacts almost entirely to the texture itself. This is in contrast to many other descriptors such as Gabor filters or the standard deviation of greyvalues of the patch. These properties make it remarkably well suited to practical applications. Our experiments focus on two main areas. The first is in bottom-up visual attention where textured objects pop out from the background using the Eigen-transform. The second is unsupervised texture segmentation with particular emphasis on real-world, cluttered indoor environments. We compare results with other state-of-the-art methods and find that the Eigen-transform is highly competitive, despite its simplicity and low dimensionality.", 
    "editor": [
      {
        "familyName": "Narayanan", 
        "givenName": "P. J.", 
        "type": "Person"
      }, 
      {
        "familyName": "Nayar", 
        "givenName": "Shree K.", 
        "type": "Person"
      }, 
      {
        "familyName": "Shum", 
        "givenName": "Heung-Yeung", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/11612032_8", 
    "isAccessibleForFree": false, 
    "isPartOf": {
      "isbn": [
        "978-3-540-31219-2", 
        "978-3-540-32433-1"
      ], 
      "name": "Computer Vision \u2013 ACCV 2006", 
      "type": "Book"
    }, 
    "keywords": [
      "novel texture descriptor", 
      "unsupervised texture segmentation", 
      "texture descriptors", 
      "original image", 
      "art methods", 
      "Gabor filters", 
      "texture segmentation", 
      "indoor environment", 
      "information characteristics", 
      "natural textures", 
      "low dimensionality", 
      "descriptors", 
      "visual attention", 
      "smoothed version", 
      "segmentation", 
      "greyvalues", 
      "practical applications", 
      "major advantage", 
      "pixels", 
      "applications", 
      "images", 
      "dimensionality", 
      "objects", 
      "main areas", 
      "texture", 
      "environment", 
      "brightness edge", 
      "simplicity", 
      "transform", 
      "version", 
      "advantages", 
      "same size", 
      "filter", 
      "edge", 
      "experiments", 
      "method", 
      "patches", 
      "matrix", 
      "measure of roughness", 
      "attention", 
      "state", 
      "particular emphasis", 
      "results", 
      "area", 
      "background", 
      "measures", 
      "characteristics", 
      "standard deviation", 
      "emphasis", 
      "eigenvalues", 
      "square patch", 
      "size", 
      "deviation", 
      "magnitude", 
      "large magnitude", 
      "properties", 
      "contrast", 
      "small magnitude", 
      "roughness", 
      "paper"
    ], 
    "name": "The Eigen-Transform and Applications", 
    "pagination": "70-79", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1009726117"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/11612032_8"
        ]
      }
    ], 
    "publisher": {
      "name": "Springer Nature", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/11612032_8", 
      "https://app.dimensions.ai/details/publication/pub.1009726117"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2022-11-24T21:14", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20221124/entities/gbq_results/chapter/chapter_244.jsonl", 
    "type": "Chapter", 
    "url": "https://doi.org/10.1007/11612032_8"
  }
]
 

Download the RDF metadata as JSON-LD, N-Triples, Turtle, or RDF/XML.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/11612032_8'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/11612032_8'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/11612032_8'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/11612032_8'
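
All four requests above follow the same content-negotiation pattern; only the Accept header changes. The snippet below is a minimal Python sketch of the JSON-LD request using only the standard library, assuming the response has the list-of-one-record shape shown in the JSON-LD block earlier on this page.

import json
import urllib.request

# Same request as the JSON-LD curl command above, via content negotiation.
url = "https://scigraph.springernature.com/pub.10.1007/11612032_8"
req = urllib.request.Request(url, headers={"Accept": "application/ld+json"})
with urllib.request.urlopen(req) as resp:
    record = json.loads(resp.read().decode("utf-8"))

# Indexing record[0] assumes the list-of-one-record shape shown above.
print(record[0]["name"])   # "The Eigen-Transform and Applications"
print(record[0]["url"])    # "https://doi.org/10.1007/11612032_8"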


 

This table displays all metadata directly associated with this object as RDF triples.

153 TRIPLES      22 PREDICATES      85 URIs      78 LITERALS      7 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/11612032_8 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author N824a250d8b914927bc8cbb8c04b3007d
4 schema:datePublished 2006
5 schema:datePublishedReg 2006-01-01
6 schema:description This paper introduces a novel texture descriptor, the Eigen-transform. The transform provides a measure of roughness by considering the eigenvalues of a matrix which is formed very simply by inserting the greyvalues of a square patch around a pixel directly into a matrix of the same size. The eigenvalue of largest magnitude turns out to give a smoothed version of the original image, but the eigenvalues of smaller magnitude encode high frequency information characteristic of natural textures. A major advantage of the Eigen-transform is that it does not fire on straight, or locally straight, brightness edges, instead it reacts almost entirely to the texture itself. This is in contrast to many other descriptors such as Gabor filters or the standard deviation of greyvalues of the patch. These properties make it remarkably well suited to practical applications. Our experiments focus on two main areas. The first is in bottom-up visual attention where textured objects pop out from the background using the Eigen-transform. The second is unsupervised texture segmentation with particular emphasis on real-world, cluttered indoor environments. We compare results with other state-of-the-art methods and find that the Eigen-transform is highly competitive, despite its simplicity and low dimensionality.
7 schema:editor Nd1b7cf2fc5d94bea994821a75a48b2ed
8 schema:genre chapter
9 schema:isAccessibleForFree false
10 schema:isPartOf N29caa81d26d34e41b23dd21b9e75bd8c
11 schema:keywords Gabor filters
12 advantages
13 applications
14 area
15 art methods
16 attention
17 background
18 brightness edge
19 characteristics
20 contrast
21 descriptors
22 deviation
23 dimensionality
24 edge
25 eigenvalues
26 emphasis
27 environment
28 experiments
29 filter
30 greyvalues
31 images
32 indoor environment
33 information characteristics
34 large magnitude
35 low dimensionality
36 magnitude
37 main areas
38 major advantage
39 matrix
40 measure of roughness
41 measures
42 method
43 natural textures
44 novel texture descriptor
45 objects
46 original image
47 paper
48 particular emphasis
49 patches
50 pixels
51 practical applications
52 properties
53 results
54 roughness
55 same size
56 segmentation
57 simplicity
58 size
59 small magnitude
60 smoothed version
61 square patch
62 standard deviation
63 state
64 texture
65 texture descriptors
66 texture segmentation
67 transform
68 unsupervised texture segmentation
69 version
70 visual attention
71 schema:name The Eigen-Transform and Applications
72 schema:pagination 70-79
73 schema:productId Nb41c313ccb6940b4ae4d558540282a25
74 Nc2e9fff3ce874a56b9e1321bf167ec8b
75 schema:publisher N6ef16d3ebe164f4bb0b485816e3a5260
76 schema:sameAs https://app.dimensions.ai/details/publication/pub.1009726117
77 https://doi.org/10.1007/11612032_8
78 schema:sdDatePublished 2022-11-24T21:14
79 schema:sdLicense https://scigraph.springernature.com/explorer/license/
80 schema:sdPublisher Nd7071f4df8754d6cba6fdcfe6003bae9
81 schema:url https://doi.org/10.1007/11612032_8
82 sgo:license sg:explorer/license/
83 sgo:sdDataset chapters
84 rdf:type schema:Chapter
85 N29caa81d26d34e41b23dd21b9e75bd8c schema:isbn 978-3-540-31219-2
86 978-3-540-32433-1
87 schema:name Computer Vision – ACCV 2006
88 rdf:type schema:Book
89 N3c6981cc7d9b45bbbf49e86d218b628c rdf:first sg:person.010365113571.00
90 rdf:rest rdf:nil
91 N4a897791ec124ec9bf29d962945367fa rdf:first N98f8fc08380f4b14ad26f60ad3a4dd44
92 rdf:rest Ne0e8b1a7cb9f49e2b9ef0195eebaac48
93 N6ef16d3ebe164f4bb0b485816e3a5260 schema:name Springer Nature
94 rdf:type schema:Organisation
95 N824a250d8b914927bc8cbb8c04b3007d rdf:first sg:person.011760054571.70
96 rdf:rest Nd5c11d4acc9a4ed5a2355f0df4cd5bed
97 N98f8fc08380f4b14ad26f60ad3a4dd44 schema:familyName Nayar
98 schema:givenName Shree K.
99 rdf:type schema:Person
100 Na8c763d1744e436c8362ff1763a2865a schema:familyName Narayanan
101 schema:givenName P. J.
102 rdf:type schema:Person
103 Naa02a4d0a64e4cd1a2b5f6190477a901 rdf:first sg:person.014400652155.17
104 rdf:rest N3c6981cc7d9b45bbbf49e86d218b628c
105 Naa7f8ea6a7dd488da0dbc48fec9972d1 schema:familyName Shum
106 schema:givenName Heung-Yeung
107 rdf:type schema:Person
108 Nb41c313ccb6940b4ae4d558540282a25 schema:name doi
109 schema:value 10.1007/11612032_8
110 rdf:type schema:PropertyValue
111 Nc2e9fff3ce874a56b9e1321bf167ec8b schema:name dimensions_id
112 schema:value pub.1009726117
113 rdf:type schema:PropertyValue
114 Nd1b7cf2fc5d94bea994821a75a48b2ed rdf:first Na8c763d1744e436c8362ff1763a2865a
115 rdf:rest N4a897791ec124ec9bf29d962945367fa
116 Nd5c11d4acc9a4ed5a2355f0df4cd5bed rdf:first sg:person.010203264647.00
117 rdf:rest Naa02a4d0a64e4cd1a2b5f6190477a901
118 Nd7071f4df8754d6cba6fdcfe6003bae9 schema:name Springer Nature - SN SciGraph project
119 rdf:type schema:Organization
120 Ne0e8b1a7cb9f49e2b9ef0195eebaac48 rdf:first Naa7f8ea6a7dd488da0dbc48fec9972d1
121 rdf:rest rdf:nil
122 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
123 schema:name Information and Computing Sciences
124 rdf:type schema:DefinedTerm
125 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
126 schema:name Artificial Intelligence and Image Processing
127 rdf:type schema:DefinedTerm
128 sg:person.010203264647.00 schema:affiliation grid-institutes:grid.5037.1
129 schema:familyName Hayman
130 schema:givenName Eric
131 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010203264647.00
132 rdf:type schema:Person
133 sg:person.010365113571.00 schema:affiliation grid-institutes:grid.418744.a
134 schema:familyName Shahshahani
135 schema:givenName Mehrdad
136 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010365113571.00
137 rdf:type schema:Person
138 sg:person.011760054571.70 schema:affiliation grid-institutes:grid.5037.1
139 schema:familyName Targhi
140 schema:givenName Alireza Tavakoli
141 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011760054571.70
142 rdf:type schema:Person
143 sg:person.014400652155.17 schema:affiliation grid-institutes:grid.5037.1
144 schema:familyName Eklundh
145 schema:givenName Jan-Olof
146 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014400652155.17
147 rdf:type schema:Person
148 grid-institutes:grid.418744.a schema:alternateName Institute for Studies in Theoretical Physics and Mathematics (IPM), Tehran, Iran
149 schema:name Institute for Studies in Theoretical Physics and Mathematics (IPM), Tehran, Iran
150 rdf:type schema:Organization
151 grid-institutes:grid.5037.1 schema:alternateName Computational Vision and Active Perception Laboratory, School of Computer Science and Communication, Royal Institute of Technology (KTH), SE-100 44, Stockholm, Sweden
152 schema:name Computational Vision and Active Perception Laboratory, School of Computer Science and Communication, Royal Institute of Technology (KTH), SE-100 44, Stockholm, Sweden
153 rdf:type schema:Organization
 



