Ontology type: schema:Chapter
Open Access: True
Date published: 2000
Authors: Bhaskar DasGupta, Barbara Hammer
Abstract: We consider the problem of efficient approximate learning by multi-layered feedforward circuits subject to two objective functions. First, we consider the objective to maximize the ratio of correctly classified points compared to the training set size (e.g., see [3], [5]). We show that for single hidden layer threshold circuits with n hidden nodes and varying input dimension, approximation of this ratio within a relative error c/n³, for some positive constant c, is NP-hard even if the number of examples is limited with respect to n. For architectures with two hidden nodes (e.g., as in [6]), approximating the objective within some fixed factor is NP-hard even if any sigmoid-like activation function in the hidden layer and ε-separation of the output [19] is considered, or if the semilinear activation function is used in place of the threshold function. Next, we consider the objective to minimize the failure ratio [2]. We show that it is NP-hard to approximate the failure ratio within every constant larger than 1 for a multilayered threshold circuit provided the input biases are zero. Furthermore, even weak approximation of this objective is almost NP-hard.
Pages: 264-278
Book: Algorithmic Learning Theory
ISBN: 978-3-540-41237-3, 978-3-540-40992-2
URL: http://scigraph.springernature.com/pub.10.1007/3-540-40992-0_20
DOI: http://dx.doi.org/10.1007/3-540-40992-0_20
Dimensions: https://app.dimensions.ai/details/publication/pub.1028452795
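For intuition about the first objective, here is a minimal Python sketch (all weights and data hypothetical, chosen only for illustration) that evaluates the success ratio, i.e. correctly classified points divided by training set size, for a single-hidden-layer threshold circuit with two hidden nodes, the architecture discussed in the abstract:

import numpy as np

def threshold(x):
    # Heaviside threshold activation: 1 if x >= 0, else 0.
    return (x >= 0).astype(float)

def circuit_output(X, W_hidden, b_hidden, w_out, b_out):
    # One hidden layer of threshold gates feeding a single threshold output.
    hidden = threshold(X @ W_hidden + b_hidden)
    return threshold(hidden @ w_out + b_out)

def success_ratio(X, y, W_hidden, b_hidden, w_out, b_out):
    # Fraction of training points classified correctly; approximating this
    # quantity within relative error c/n^3 is what the chapter proves NP-hard.
    return np.mean(circuit_output(X, W_hidden, b_hidden, w_out, b_out) == y)

# Hypothetical training set: XOR on 4 points, solved by 2 hidden threshold gates.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
W_hidden = np.array([[1., -1.], [1., -1.]])
b_hidden = np.array([-0.5, 1.5])
w_out = np.array([1., 1.])
b_out = -1.5
print(success_ratio(X, y, W_hidden, b_hidden, w_out, b_out))  # 1.0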
JSON-LD is the canonical representation for SciGraph data.
TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool.
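For instance, a minimal Python sketch (assuming the record below has been saved locally as record.json, a hypothetical filename) pulls out a few fields with the standard json module:

import json

# Load the SciGraph JSON-LD record; it is a JSON array holding one chapter object.
with open("record.json") as f:
    chapter = json.load(f)[0]

print(chapter["name"])                               # chapter title
print([a["familyName"] for a in chapter["author"]])  # ['DasGupta', 'Hammer']
print(chapter["isPartOf"]["isbn"])                   # book ISBNs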
[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/01",
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
        "name": "Mathematical Sciences",
        "type": "DefinedTerm"
      },
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0103",
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
        "name": "Numerical and Computational Mathematics",
        "type": "DefinedTerm"
      }
    ],
    "author": [
      {
        "affiliation": {
          "alternateName": "Department of Computer Science, Rutgers University, Camden, 08102, NJ, USA",
          "id": "http://www.grid.ac/institutes/grid.430387.b",
          "name": [
            "Department of Computer Science, Rutgers University, Camden, 08102, NJ, USA"
          ],
          "type": "Organization"
        },
        "familyName": "DasGupta",
        "givenName": "Bhaskar",
        "id": "sg:person.0763403270.10",
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0763403270.10"
        ],
        "type": "Person"
      },
      {
        "affiliation": {
          "alternateName": "Department of Mathematics/Computer Science, University of Osnabr\u00fcck, D-49069, Osnabr\u00fcck, Germany",
          "id": "http://www.grid.ac/institutes/grid.10854.38",
          "name": [
            "Department of Mathematics/Computer Science, University of Osnabr\u00fcck, D-49069, Osnabr\u00fcck, Germany"
          ],
          "type": "Organization"
        },
        "familyName": "Hammer",
        "givenName": "Barbara",
        "id": "sg:person.0650107566.36",
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0650107566.36"
        ],
        "type": "Person"
      }
    ],
    "datePublished": "2000",
    "datePublishedReg": "2000-01-01",
    "description": "We consider the problem of efficient approximate learning by multi-layered feedforward circuits subject to two objective functions.First, we consider the objective to maximize the ratio of correctly classified points compared to the training set size (e.g., see [3],[5]). We show that for single hidden layer threshold circuits with n hidden nodes and varying input dimension, approximation of this ratio within a relative error c/n3, for some positive constant c, is NP-hard even if the number of examples is limited with respect to n. For architectures with two hidden nodes (e.g., as in [6]), approximating the objective within some fixed factor is NP-hard even if any sigmoid-like activation function in the hidden layer and \u03b5-separation of the output [19] is considered, or if the semilinear activation function substitutes the threshold function.Next, we consider the objective to minimize the failure ratio [2]. We show that it is NP-hard to approximate the failure ratio within every constant larger than 1 for a multilayered threshold circuit provided the input biases are zero. Furthermore, even weak approximation of this objective is almost NP-hard.",
    "editor": [
      {
        "familyName": "Arimura",
        "givenName": "Hiroki",
        "type": "Person"
      },
      {
        "familyName": "Jain",
        "givenName": "Sanjay",
        "type": "Person"
      },
      {
        "familyName": "Sharma",
        "givenName": "Arun",
        "type": "Person"
      }
    ],
    "genre": "chapter",
    "id": "sg:pub.10.1007/3-540-40992-0_20",
    "inLanguage": "en",
    "isAccessibleForFree": true,
    "isPartOf": {
      "isbn": [
        "978-3-540-41237-3",
        "978-3-540-40992-2"
      ],
      "name": "Algorithmic Learning Theory",
      "type": "Book"
    },
    "keywords": [
      "sigmoid-like activation function",
      "objective function",
      "input dimension",
      "approximation",
      "number of examples",
      "activation function",
      "weak approximation",
      "approximate learning",
      "circuit",
      "function",
      "NPs",
      "hidden layer",
      "threshold function",
      "problem",
      "feedforward circuit",
      "objective",
      "ratio",
      "point",
      "threshold circuits",
      "nodes",
      "dimensions",
      "number",
      "respect",
      "layer",
      "output",
      "failure ratio",
      "input biases",
      "Multi",
      "training",
      "size",
      "N3",
      "example",
      "architecture",
      "factors",
      "separation",
      "biases",
      "learning"
    ],
    "name": "On Approximate Learning by Multi-layered Feedforward Circuits",
    "pagination": "264-278",
    "productId": [
      {
        "name": "dimensions_id",
        "type": "PropertyValue",
        "value": [
          "pub.1028452795"
        ]
      },
      {
        "name": "doi",
        "type": "PropertyValue",
        "value": [
          "10.1007/3-540-40992-0_20"
        ]
      }
    ],
    "publisher": {
      "name": "Springer Nature",
      "type": "Organisation"
    },
    "sameAs": [
      "https://doi.org/10.1007/3-540-40992-0_20",
      "https://app.dimensions.ai/details/publication/pub.1028452795"
    ],
    "sdDataset": "chapters",
    "sdDatePublished": "2022-05-20T07:44",
    "sdLicense": "https://scigraph.springernature.com/explorer/license/",
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project",
      "type": "Organization"
    },
    "sdSource": "s3://com-springernature-scigraph/baseset/20220519/entities/gbq_results/chapter/chapter_277.jsonl",
    "type": "Chapter",
    "url": "https://doi.org/10.1007/3-540-40992-0_20"
  }
]
Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.
JSON-LD is a popular format for linked data which is fully compatible with JSON.
curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/3-540-40992-0_20'
N-Triples is a line-based linked data format ideal for batch operations.
curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/3-540-40992-0_20'
Turtle is a human-readable linked data format.
curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/3-540-40992-0_20'
RDF/XML is a standard XML format for linked data.
curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/3-540-40992-0_20'
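The same content negotiation works from code; a short Python sketch using the third-party requests library (an assumption, any HTTP client would do):

import requests

URL = "https://scigraph.springernature.com/pub.10.1007/3-540-40992-0_20"

# Request each RDF serialization via the Accept header, mirroring the curl calls above.
for accept in ("application/ld+json", "application/n-triples",
               "text/turtle", "application/rdf+xml"):
    resp = requests.get(URL, headers={"Accept": accept})
    print(accept, resp.status_code, len(resp.text), "bytes")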
This table displays all metadata directly associated with this object as RDF triples: 117 triples, 23 predicates, 63 URIs, 56 literals, 7 blank nodes.
# | Subject | Predicate | Object |
---|---|---|---|
1 | sg:pub.10.1007/3-540-40992-0_20 | schema:about | anzsrc-for:01 |
2 | ″ | ″ | anzsrc-for:0103 |
3 | ″ | schema:author | N65c2a93aa1a34b2b98800ed1a3672147 |
4 | ″ | schema:datePublished | 2000 |
5 | ″ | schema:datePublishedReg | 2000-01-01 |
6 | ″ | schema:description | We consider the problem of efficient approximate learning by multi-layered feedforward circuits subject to two objective functions.First, we consider the objective to maximize the ratio of correctly classified points compared to the training set size (e.g., see [3],[5]). We show that for single hidden layer threshold circuits with n hidden nodes and varying input dimension, approximation of this ratio within a relative error c/n3, for some positive constant c, is NP-hard even if the number of examples is limited with respect to n. For architectures with two hidden nodes (e.g., as in [6]), approximating the objective within some fixed factor is NP-hard even if any sigmoid-like activation function in the hidden layer and ε-separation of the output [19] is considered, or if the semilinear activation function substitutes the threshold function.Next, we consider the objective to minimize the failure ratio [2]. We show that it is NP-hard to approximate the failure ratio within every constant larger than 1 for a multilayered threshold circuit provided the input biases are zero. Furthermore, even weak approximation of this objective is almost NP-hard. |
7 | ″ | schema:editor | N96a74c659b144f609699a01b2c4a674c |
8 | ″ | schema:genre | chapter |
9 | ″ | schema:inLanguage | en |
10 | ″ | schema:isAccessibleForFree | true |
11 | ″ | schema:isPartOf | Ne385e31f78414952a15bf015d48f81c6 |
12 | ″ | schema:keywords | Multi |
13 | ″ | ″ | N3 |
14 | ″ | ″ | NPs |
15 | ″ | ″ | activation function |
16 | ″ | ″ | approximate learning |
17 | ″ | ″ | approximation |
18 | ″ | ″ | architecture |
19 | ″ | ″ | biases |
20 | ″ | ″ | circuit |
21 | ″ | ″ | dimensions |
22 | ″ | ″ | example |
23 | ″ | ″ | factors |
24 | ″ | ″ | failure ratio |
25 | ″ | ″ | feedforward circuit |
26 | ″ | ″ | function |
27 | ″ | ″ | hidden layer |
28 | ″ | ″ | input biases |
29 | ″ | ″ | input dimension |
30 | ″ | ″ | layer |
31 | ″ | ″ | learning |
32 | ″ | ″ | nodes |
33 | ″ | ″ | number |
34 | ″ | ″ | number of examples |
35 | ″ | ″ | objective |
36 | ″ | ″ | objective function |
37 | ″ | ″ | output |
38 | ″ | ″ | point |
39 | ″ | ″ | problem |
40 | ″ | ″ | ratio |
41 | ″ | ″ | respect |
42 | ″ | ″ | separation |
43 | ″ | ″ | sigmoid-like activation function |
44 | ″ | ″ | size |
45 | ″ | ″ | threshold circuits |
46 | ″ | ″ | threshold function |
47 | ″ | ″ | training |
48 | ″ | ″ | weak approximation |
49 | ″ | schema:name | On Approximate Learning by Multi-layered Feedforward Circuits |
50 | ″ | schema:pagination | 264-278 |
51 | ″ | schema:productId | N2c0cc829f82447cfb566e3eaf35fe270 |
52 | ″ | ″ | N731e36735c884c7f82a39a70fffc7d2e |
53 | ″ | schema:publisher | Nfa26a8783fda4d8298b5c5e33ce04b41 |
54 | ″ | schema:sameAs | https://app.dimensions.ai/details/publication/pub.1028452795 |
55 | ″ | ″ | https://doi.org/10.1007/3-540-40992-0_20 |
56 | ″ | schema:sdDatePublished | 2022-05-20T07:44 |
57 | ″ | schema:sdLicense | https://scigraph.springernature.com/explorer/license/ |
58 | ″ | schema:sdPublisher | Nab7ebcc3a9054d1490ceb632f1881bd8 |
59 | ″ | schema:url | https://doi.org/10.1007/3-540-40992-0_20 |
60 | ″ | sgo:license | sg:explorer/license/ |
61 | ″ | sgo:sdDataset | chapters |
62 | ″ | rdf:type | schema:Chapter |
63 | N09c564777d6a448ba166f030eb5891ab | rdf:first | sg:person.0650107566.36 |
64 | ″ | rdf:rest | rdf:nil |
65 | N11e23f32129449a998bed05f7696a56c | rdf:first | N7cdda018451d42f2bc506b2ea29aa9dc |
66 | ″ | rdf:rest | rdf:nil |
67 | N2c0cc829f82447cfb566e3eaf35fe270 | schema:name | doi |
68 | ″ | schema:value | 10.1007/3-540-40992-0_20 |
69 | ″ | rdf:type | schema:PropertyValue |
70 | N65c2a93aa1a34b2b98800ed1a3672147 | rdf:first | sg:person.0763403270.10 |
71 | ″ | rdf:rest | N09c564777d6a448ba166f030eb5891ab |
72 | N6fadaea4438241c1b6f3260d171cf2a5 | rdf:first | Ne04d934f2266408d9861df7f5696aeb3 |
73 | ″ | rdf:rest | N11e23f32129449a998bed05f7696a56c |
74 | N731e36735c884c7f82a39a70fffc7d2e | schema:name | dimensions_id |
75 | ″ | schema:value | pub.1028452795 |
76 | ″ | rdf:type | schema:PropertyValue |
77 | N7cdda018451d42f2bc506b2ea29aa9dc | schema:familyName | Sharma |
78 | ″ | schema:givenName | Arun |
79 | ″ | rdf:type | schema:Person |
80 | N96a74c659b144f609699a01b2c4a674c | rdf:first | Nc829b63ada6f4deeabb0d3cbf3c0986f |
81 | ″ | rdf:rest | N6fadaea4438241c1b6f3260d171cf2a5 |
82 | Nab7ebcc3a9054d1490ceb632f1881bd8 | schema:name | Springer Nature - SN SciGraph project |
83 | ″ | rdf:type | schema:Organization |
84 | Nc829b63ada6f4deeabb0d3cbf3c0986f | schema:familyName | Arimura |
85 | ″ | schema:givenName | Hiroki |
86 | ″ | rdf:type | schema:Person |
87 | Ne04d934f2266408d9861df7f5696aeb3 | schema:familyName | Jain |
88 | ″ | schema:givenName | Sanjay |
89 | ″ | rdf:type | schema:Person |
90 | Ne385e31f78414952a15bf015d48f81c6 | schema:isbn | 978-3-540-40992-2 |
91 | ″ | ″ | 978-3-540-41237-3 |
92 | ″ | schema:name | Algorithmic Learning Theory |
93 | ″ | rdf:type | schema:Book |
94 | Nfa26a8783fda4d8298b5c5e33ce04b41 | schema:name | Springer Nature |
95 | ″ | rdf:type | schema:Organisation |
96 | anzsrc-for:01 | schema:inDefinedTermSet | anzsrc-for: |
97 | ″ | schema:name | Mathematical Sciences |
98 | ″ | rdf:type | schema:DefinedTerm |
99 | anzsrc-for:0103 | schema:inDefinedTermSet | anzsrc-for: |
100 | ″ | schema:name | Numerical and Computational Mathematics |
101 | ″ | rdf:type | schema:DefinedTerm |
102 | sg:person.0650107566.36 | schema:affiliation | grid-institutes:grid.10854.38 |
103 | ″ | schema:familyName | Hammer |
104 | ″ | schema:givenName | Barbara |
105 | ″ | schema:sameAs | https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0650107566.36 |
106 | ″ | rdf:type | schema:Person |
107 | sg:person.0763403270.10 | schema:affiliation | grid-institutes:grid.430387.b |
108 | ″ | schema:familyName | DasGupta |
109 | ″ | schema:givenName | Bhaskar |
110 | ″ | schema:sameAs | https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0763403270.10 |
111 | ″ | rdf:type | schema:Person |
112 | grid-institutes:grid.10854.38 | schema:alternateName | Department of Mathematics/Computer Science, University of Osnabrück, D-49069, Osnabrück, Germany |
113 | ″ | schema:name | Department of Mathematics/Computer Science, University of Osnabrück, D-49069, Osnabrück, Germany |
114 | ″ | rdf:type | schema:Organization |
115 | grid-institutes:grid.430387.b | schema:alternateName | Department of Computer Science, Rutgers University, Camden, 08102, NJ, USA |
116 | ″ | schema:name | Department of Computer Science, Rutgers University, Camden, 08102, NJ, USA |
117 | ″ | rdf:type | schema:Organization |
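The counts in the table header can be checked programmatically; a sketch assuming rdflib version 6 or later (which parses JSON-LD natively) and that the node-counting conventions match SciGraph's:

from rdflib import Graph, BNode, Literal, URIRef

g = Graph()
# Fetch and parse the record; the format argument tells rdflib to expect JSON-LD.
g.parse("https://scigraph.springernature.com/pub.10.1007/3-540-40992-0_20",
        format="json-ld")

nodes = set(g.subjects()) | set(g.objects())
print("triples:    ", len(g))
print("predicates: ", len(set(g.predicates())))
print("URIs:       ", len([n for n in nodes if isinstance(n, URIRef)]))
print("literals:   ", len([n for n in nodes if isinstance(n, Literal)]))
print("blank nodes:", len([n for n in nodes if isinstance(n, BNode)]))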