Tracking Context Changes through Meta-Learning


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

1997-06

AUTHORS

Gerhard Widmer

ABSTRACT

The article deals with the problem of learning incrementally (‘on-line’) in domains where the target concepts are context-dependent, so that changes in context can produce more or less radical changes in the associated concepts. In particular, we concentrate on a class of learning tasks where the domain provides explicit clues as to the current context (e.g., attributes with characteristic values). A general two-level learning model is presented that effectively adjusts to changing contexts by trying to detect (via ‘meta-learning’) contextual clues and using this information to focus the learning process. Context learning and detection occur during regular on-line learning, without separate training phases for context recognition. Two operational systems based on this model are presented that differ in the underlying learning algorithm and in the way they use contextual information: METAL(B) combines meta-learning with a Bayesian classifier, while METAL(IB) is based on an instance-based learning algorithm. Experiments with synthetic domains as well as a number of ‘real-world’ problems show that the algorithms are robust in a variety of dimensions, and that meta-learning can produce substantial increases in accuracy over simple object-level learning in situations with changing contexts.

PAGES

259-286

References to SciGraph publications

Journal

TITLE

Machine Learning

ISSUE

3

VOLUME

27

Author Affiliations

Identifiers

URI

http://scigraph.springernature.com/pub.10.1023/a:1007365809034

DOI

http://dx.doi.org/10.1023/a:1007365809034

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1017758049



JSON-LD is the canonical representation for SciGraph data.


[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/1701", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Psychology", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/17", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Psychology and Cognitive Sciences", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "University of Vienna", 
          "id": "https://www.grid.ac/institutes/grid.10420.37", 
          "name": [
            "Department of Medical Cybernetics and AI, University of Vienna, and Austrian Research Institute for Artificial Intelligence, Schottengasse 3, A-1010, Vienna, Austria"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Widmer", 
        "givenName": "Gerhard", 
        "id": "sg:person.013641401431.40", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013641401431.40"
        ], 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "sg:pub.10.1007/bf00994004", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1012729764", 
          "https://doi.org/10.1007/bf00994004"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/b978-1-55860-307-3.50042-3", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1018885226"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/bf00116900", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1025237168", 
          "https://doi.org/10.1007/bf00116900"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/bf00116895", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1029305571", 
          "https://doi.org/10.1007/bf00116895"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/3-540-56602-3_139", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1036804921", 
          "https://doi.org/10.1007/3-540-56602-3_139"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/3-540-56602-3_158", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1042322876", 
          "https://doi.org/10.1007/3-540-56602-3_158"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/0167-8655(89)90092-5", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1044019720"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1162/neco.1990.2.4.472", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1044204967"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/b978-0-934613-41-5.50009-x", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1045499032"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/bf00871892", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1048062584", 
          "https://doi.org/10.1007/bf00871892"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/bf00153759", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1049631378", 
          "https://doi.org/10.1007/bf00153759"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/72.182692", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061218313"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.21236/ada294075", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1091583812"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.21236/ada285342", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1091587277"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "1997-06", 
    "datePublishedReg": "1997-06-01", 
    "description": "The article deals with the problem of learning incrementally (\u2018on-line\u2019) in domains where the target concepts are context-dependent, so that changes in context can produce more or less radical changes in the associated concepts. In particular, we concentrate on a class of learning tasks where the domain provides explicit clues as to the current context (e.g., attributes with characteristic values). A general two-level learning model is presented that effectively adjusts to changing contexts by trying to detect (via \u2018meta-learning\u2019) contextual clues and using this information to focus the learning process. Context learning and detection occur during regular on-line learning, without separate training phases for context recognition. Two operational systems based on this model are presented that differ in the underlying learning algorithm and in the way they use contextual information: METAL(B) combines meta-learning with a Bayesian classifier, while METAL(IB) is based on an instance-based learning algorithm. Experiments with synthetic domains as well as a number of \u2018real-world\u2019 problems show that the algorithms are robust in a variety of dimensions, and that meta-learning can produce substantial increases in accuracy over simple object-level learning in situations with changing contexts.", 
    "genre": "research_article", 
    "id": "sg:pub.10.1023/a:1007365809034", 
    "inLanguage": [
      "en"
    ], 
    "isAccessibleForFree": true, 
    "isPartOf": [
      {
        "id": "sg:journal.1125588", 
        "issn": [
          "0885-6125", 
          "1573-0565"
        ], 
        "name": "Machine Learning", 
        "type": "Periodical"
      }, 
      {
        "issueNumber": "3", 
        "type": "PublicationIssue"
      }, 
      {
        "type": "PublicationVolume", 
        "volumeNumber": "27"
      }
    ], 
    "name": "Tracking Context Changes through Meta-Learning", 
    "pagination": "259-286", 
    "productId": [
      {
        "name": "readcube_id", 
        "type": "PropertyValue", 
        "value": [
          "549c64dd98066baf5cd666fc91e34d62193e15874cf66dfba2994866ebf3ef3c"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1023/a:1007365809034"
        ]
      }, 
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1017758049"
        ]
      }
    ], 
    "sameAs": [
      "https://doi.org/10.1023/a:1007365809034", 
      "https://app.dimensions.ai/details/publication/pub.1017758049"
    ], 
    "sdDataset": "articles", 
    "sdDatePublished": "2019-04-10T22:29", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8690_00000499.jsonl", 
    "type": "ScholarlyArticle", 
    "url": "http://link.springer.com/10.1023/A:1007365809034"
  }
]
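The JSON-LD record above is a plain JSON array, so it can be inspected with standard tooling. The sketch below (Python standard library only) parses a trimmed copy of the record and extracts the title, journal, and DOI. The field names (`name`, `isPartOf`, `productId`, etc.) come from the record above; the extraction code itself is illustrative, not part of SciGraph.

```python
import json

# A trimmed copy of the record above; the full JSON-LD has the same shape.
record_json = '''
[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
    "id": "sg:pub.10.1023/a:1007365809034",
    "name": "Tracking Context Changes through Meta-Learning",
    "datePublished": "1997-06",
    "isPartOf": [
      {"id": "sg:journal.1125588", "issn": ["0885-6125", "1573-0565"],
       "name": "Machine Learning", "type": "Periodical"},
      {"issueNumber": "3", "type": "PublicationIssue"},
      {"type": "PublicationVolume", "volumeNumber": "27"}
    ],
    "productId": [
      {"name": "doi", "type": "PropertyValue", "value": ["10.1023/a:1007365809034"]}
    ]
  }
]
'''

records = json.loads(record_json)
pub = records[0]  # the file is a JSON array of records

# isPartOf mixes Periodical / PublicationIssue / PublicationVolume nodes,
# so filter by "type" rather than relying on position.
journal = next(p for p in pub["isPartOf"] if p.get("type") == "Periodical")
doi = next(p["value"][0] for p in pub["productId"] if p["name"] == "doi")

print(pub["name"])      # Tracking Context Changes through Meta-Learning
print(journal["name"])  # Machine Learning
print(doi)              # 10.1023/a:1007365809034
```

Note that identifiers such as the DOI live under `productId` as `PropertyValue` nodes with list-valued `value` fields, which is why the extraction indexes `["value"][0]`.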
 


HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1023/a:1007365809034'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1023/a:1007365809034'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1023/a:1007365809034'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1023/a:1007365809034'
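The same content negotiation can be scripted: one URL serves all four serializations, selected via the `Accept` header. The sketch below uses only the Python standard library; `build_request` is a hypothetical helper for illustration, not a SciGraph API, and the actual fetch (commented out) requires network access.

```python
import urllib.request

# The publication URL from the curl examples above.
PUB_URL = "https://scigraph.springernature.com/pub.10.1023/a:1007365809034"

# Accept headers for the four serializations listed above.
ACCEPT_HEADERS = {
    "json-ld":  "application/ld+json",
    "ntriples": "application/n-triples",
    "turtle":   "text/turtle",
    "rdfxml":   "application/rdf+xml",
}

def build_request(fmt: str) -> urllib.request.Request:
    """Build (but do not send) a request for one RDF serialization."""
    return urllib.request.Request(PUB_URL, headers={"Accept": ACCEPT_HEADERS[fmt]})

req = build_request("turtle")
print(req.get_header("Accept"))  # text/turtle

# To actually fetch (network access required):
#   with urllib.request.urlopen(req) as resp:
#       body = resp.read().decode("utf-8")
```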


 

This table displays all metadata directly associated with this object as RDF triples.

110 TRIPLES      21 PREDICATES      41 URIs      19 LITERALS      7 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1023/a:1007365809034 schema:about anzsrc-for:17
2 anzsrc-for:1701
3 schema:author N9d61f4d5a14e43cb9f2eb41ccbfa9553
4 schema:citation sg:pub.10.1007/3-540-56602-3_139
5 sg:pub.10.1007/3-540-56602-3_158
6 sg:pub.10.1007/bf00116895
7 sg:pub.10.1007/bf00116900
8 sg:pub.10.1007/bf00153759
9 sg:pub.10.1007/bf00871892
10 sg:pub.10.1007/bf00994004
11 https://doi.org/10.1016/0167-8655(89)90092-5
12 https://doi.org/10.1016/b978-0-934613-41-5.50009-x
13 https://doi.org/10.1016/b978-1-55860-307-3.50042-3
14 https://doi.org/10.1109/72.182692
15 https://doi.org/10.1162/neco.1990.2.4.472
16 https://doi.org/10.21236/ada285342
17 https://doi.org/10.21236/ada294075
18 schema:datePublished 1997-06
19 schema:datePublishedReg 1997-06-01
20 schema:description The article deals with the problem of learning incrementally (‘on-line’) in domains where the target concepts are context-dependent, so that changes in context can produce more or less radical changes in the associated concepts. In particular, we concentrate on a class of learning tasks where the domain provides explicit clues as to the current context (e.g., attributes with characteristic values). A general two-level learning model is presented that effectively adjusts to changing contexts by trying to detect (via ‘meta-learning’) contextual clues and using this information to focus the learning process. Context learning and detection occur during regular on-line learning, without separate training phases for context recognition. Two operational systems based on this model are presented that differ in the underlying learning algorithm and in the way they use contextual information: METAL(B) combines meta-learning with a Bayesian classifier, while METAL(IB) is based on an instance-based learning algorithm. Experiments with synthetic domains as well as a number of ‘real-world’ problems show that the algorithms are robust in a variety of dimensions, and that meta-learning can produce substantial increases in accuracy over simple object-level learning in situations with changing contexts.
21 schema:genre research_article
22 schema:inLanguage en
23 schema:isAccessibleForFree true
24 schema:isPartOf Nb693eda6303445649a70023b6379719e
25 Nb829ad7710534c70a5b2e5212489cf31
26 sg:journal.1125588
27 schema:name Tracking Context Changes through Meta-Learning
28 schema:pagination 259-286
29 schema:productId N4d4d2ffff1f9443d9d6fd7187ada6cc9
30 N6dfd5c382f0d4a93bdad327f38d58993
31 N7bfaadcf30eb4aebb6eb3a834461dd51
32 schema:sameAs https://app.dimensions.ai/details/publication/pub.1017758049
33 https://doi.org/10.1023/a:1007365809034
34 schema:sdDatePublished 2019-04-10T22:29
35 schema:sdLicense https://scigraph.springernature.com/explorer/license/
36 schema:sdPublisher N70879709045c46ab9f570678c65015d3
37 schema:url http://link.springer.com/10.1023/A:1007365809034
38 sgo:license sg:explorer/license/
39 sgo:sdDataset articles
40 rdf:type schema:ScholarlyArticle
41 N4d4d2ffff1f9443d9d6fd7187ada6cc9 schema:name dimensions_id
42 schema:value pub.1017758049
43 rdf:type schema:PropertyValue
44 N6dfd5c382f0d4a93bdad327f38d58993 schema:name doi
45 schema:value 10.1023/a:1007365809034
46 rdf:type schema:PropertyValue
47 N70879709045c46ab9f570678c65015d3 schema:name Springer Nature - SN SciGraph project
48 rdf:type schema:Organization
49 N7bfaadcf30eb4aebb6eb3a834461dd51 schema:name readcube_id
50 schema:value 549c64dd98066baf5cd666fc91e34d62193e15874cf66dfba2994866ebf3ef3c
51 rdf:type schema:PropertyValue
52 N9d61f4d5a14e43cb9f2eb41ccbfa9553 rdf:first sg:person.013641401431.40
53 rdf:rest rdf:nil
54 Nb693eda6303445649a70023b6379719e schema:volumeNumber 27
55 rdf:type schema:PublicationVolume
56 Nb829ad7710534c70a5b2e5212489cf31 schema:issueNumber 3
57 rdf:type schema:PublicationIssue
58 anzsrc-for:17 schema:inDefinedTermSet anzsrc-for:
59 schema:name Psychology and Cognitive Sciences
60 rdf:type schema:DefinedTerm
61 anzsrc-for:1701 schema:inDefinedTermSet anzsrc-for:
62 schema:name Psychology
63 rdf:type schema:DefinedTerm
64 sg:journal.1125588 schema:issn 0885-6125
65 1573-0565
66 schema:name Machine Learning
67 rdf:type schema:Periodical
68 sg:person.013641401431.40 schema:affiliation https://www.grid.ac/institutes/grid.10420.37
69 schema:familyName Widmer
70 schema:givenName Gerhard
71 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013641401431.40
72 rdf:type schema:Person
73 sg:pub.10.1007/3-540-56602-3_139 schema:sameAs https://app.dimensions.ai/details/publication/pub.1036804921
74 https://doi.org/10.1007/3-540-56602-3_139
75 rdf:type schema:CreativeWork
76 sg:pub.10.1007/3-540-56602-3_158 schema:sameAs https://app.dimensions.ai/details/publication/pub.1042322876
77 https://doi.org/10.1007/3-540-56602-3_158
78 rdf:type schema:CreativeWork
79 sg:pub.10.1007/bf00116895 schema:sameAs https://app.dimensions.ai/details/publication/pub.1029305571
80 https://doi.org/10.1007/bf00116895
81 rdf:type schema:CreativeWork
82 sg:pub.10.1007/bf00116900 schema:sameAs https://app.dimensions.ai/details/publication/pub.1025237168
83 https://doi.org/10.1007/bf00116900
84 rdf:type schema:CreativeWork
85 sg:pub.10.1007/bf00153759 schema:sameAs https://app.dimensions.ai/details/publication/pub.1049631378
86 https://doi.org/10.1007/bf00153759
87 rdf:type schema:CreativeWork
88 sg:pub.10.1007/bf00871892 schema:sameAs https://app.dimensions.ai/details/publication/pub.1048062584
89 https://doi.org/10.1007/bf00871892
90 rdf:type schema:CreativeWork
91 sg:pub.10.1007/bf00994004 schema:sameAs https://app.dimensions.ai/details/publication/pub.1012729764
92 https://doi.org/10.1007/bf00994004
93 rdf:type schema:CreativeWork
94 https://doi.org/10.1016/0167-8655(89)90092-5 schema:sameAs https://app.dimensions.ai/details/publication/pub.1044019720
95 rdf:type schema:CreativeWork
96 https://doi.org/10.1016/b978-0-934613-41-5.50009-x schema:sameAs https://app.dimensions.ai/details/publication/pub.1045499032
97 rdf:type schema:CreativeWork
98 https://doi.org/10.1016/b978-1-55860-307-3.50042-3 schema:sameAs https://app.dimensions.ai/details/publication/pub.1018885226
99 rdf:type schema:CreativeWork
100 https://doi.org/10.1109/72.182692 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061218313
101 rdf:type schema:CreativeWork
102 https://doi.org/10.1162/neco.1990.2.4.472 schema:sameAs https://app.dimensions.ai/details/publication/pub.1044204967
103 rdf:type schema:CreativeWork
104 https://doi.org/10.21236/ada285342 schema:sameAs https://app.dimensions.ai/details/publication/pub.1091587277
105 rdf:type schema:CreativeWork
106 https://doi.org/10.21236/ada294075 schema:sameAs https://app.dimensions.ai/details/publication/pub.1091583812
107 rdf:type schema:CreativeWork
108 https://www.grid.ac/institutes/grid.10420.37 schema:alternateName University of Vienna
109 schema:name Department of Medical Cybernetics and AI, University of Vienna, and Austrian Research Institute for Artificial Intelligence, Schottengasse 3, A-1010, Vienna, Austria
110 rdf:type schema:Organization
 



