Ontology type: schema:Chapter
DATE PUBLISHED: 2021
AUTHORS: Takeru Aoki, Keiki Takadama, Hiroyuki Sato
ABSTRACT: This work proposes a double-layered cortical learning algorithm. The cortical learning algorithm is a time-series prediction methodology inspired by the human neocortex. The human neocortex has a multi-layer structure, while the conventional cortical learning algorithm has a single-layer structure. This work introduces a double-layered structure into the cortical learning algorithm. The first layer represents the input data and its context at every time step. The input data context presentation in the first layer is transferred to the second layer, where it is represented as an abstract representation. The abstract prediction in the second layer is also reflected back to the first layer to modify and enhance the prediction in the first layer. The experimental results show that the proposed double-layered cortical learning algorithm achieves higher prediction accuracy than the conventional single-layered cortical learning algorithm and recurrent neural networks with long short-term memory on several artificial time-series data sets.
PAGES: 33-44
Bio-Inspired Information and Communications Technologies
ISBN: 978-3-030-92162-0, 978-3-030-92163-7
http://scigraph.springernature.com/pub.10.1007/978-3-030-92163-7_4
DOI: http://dx.doi.org/10.1007/978-3-030-92163-7_4
DIMENSIONS: https://app.dimensions.ai/details/publication/pub.1143570240
JSON-LD is the canonical representation for SciGraph data.
TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.
[
{
"@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
"about": [
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Information and Computing Sciences",
"type": "DefinedTerm"
},
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Artificial Intelligence and Image Processing",
"type": "DefinedTerm"
}
],
"author": [
{
"affiliation": {
"alternateName": "The University of Electro-Communications, 1-5-1 Chofugaoka, 182-8585, Chofu, Tokyo, Japan",
"id": "http://www.grid.ac/institutes/grid.266298.1",
"name": [
"The University of Electro-Communications, 1-5-1 Chofugaoka, 182-8585, Chofu, Tokyo, Japan"
],
"type": "Organization"
},
"familyName": "Aoki",
"givenName": "Takeru",
"id": "sg:person.015162262761.13",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015162262761.13"
],
"type": "Person"
},
{
"affiliation": {
"alternateName": "The University of Electro-Communications, 1-5-1 Chofugaoka, 182-8585, Chofu, Tokyo, Japan",
"id": "http://www.grid.ac/institutes/grid.266298.1",
"name": [
"The University of Electro-Communications, 1-5-1 Chofugaoka, 182-8585, Chofu, Tokyo, Japan"
],
"type": "Organization"
},
"familyName": "Takadama",
"givenName": "Keiki",
"id": "sg:person.012774267611.99",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012774267611.99"
],
"type": "Person"
},
{
"affiliation": {
"alternateName": "The University of Electro-Communications, 1-5-1 Chofugaoka, 182-8585, Chofu, Tokyo, Japan",
"id": "http://www.grid.ac/institutes/grid.266298.1",
"name": [
"The University of Electro-Communications, 1-5-1 Chofugaoka, 182-8585, Chofu, Tokyo, Japan"
],
"type": "Organization"
},
"familyName": "Sato",
"givenName": "Hiroyuki",
"id": "sg:person.07750750604.05",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07750750604.05"
],
"type": "Person"
}
],
"datePublished": "2021",
"datePublishedReg": "2021-01-01",
"description": "This work proposes a double-layered cortical learning algorithm. The cortical learning algorithm is a time-series prediction methodology inspired from the human neuro-cortex. The human neuro-cortex has a multi-layer structure, while the conventional cortical learning algorithm has a single layer structure. This work introduces a double-layered structure into the cortical learning algorithm. The first layer represents the input data and its context every time-step. The input data context presentation in the first layer is transferred to the second layer, and it is represented in the second layer as an abstract representation. Also, the abstract prediction in the second layer is reflected to the first layer to modify and enhance the prediction in the first layer. The experimental results show that the proposed double-layered cortical learning algorithm achieves higher prediction accuracy than the conventional single-layered cortical learning algorithms and the recurrent neural networks with the long short-term memory on several artificial time-series data.",
"editor": [
{
"familyName": "Nakano",
"givenName": "Tadashi",
"type": "Person"
}
],
"genre": "chapter",
"id": "sg:pub.10.1007/978-3-030-92163-7_4",
"inLanguage": "en",
"isAccessibleForFree": false,
"isPartOf": {
"isbn": [
"978-3-030-92162-0",
"978-3-030-92163-7"
],
"name": "Bio-Inspired Information and Communications Technologies",
"type": "Book"
},
"keywords": [
"cortical learning algorithm",
"learning algorithm",
"long short-term memory",
"artificial time-series data",
"recurrent neural network",
"time series prediction",
"conventional cortical learning algorithm",
"first layer",
"high prediction accuracy",
"neural network",
"second layer",
"short-term memory",
"abstract representation",
"time series data",
"input data",
"algorithm",
"prediction accuracy",
"context presentation",
"experimental results",
"prediction methodology",
"Abstract Prediction",
"network",
"multi-layer structure",
"prediction",
"accuracy",
"representation",
"work",
"data",
"memory",
"methodology",
"single-layer structure",
"context",
"layer",
"structure",
"results",
"presentation",
"layer structure",
"double-layered structure"
],
"name": "Double-Layered Cortical Learning Algorithm for Time-Series Prediction",
"pagination": "33-44",
"productId": [
{
"name": "dimensions_id",
"type": "PropertyValue",
"value": [
"pub.1143570240"
]
},
{
"name": "doi",
"type": "PropertyValue",
"value": [
"10.1007/978-3-030-92163-7_4"
]
}
],
"publisher": {
"name": "Springer Nature",
"type": "Organisation"
},
"sameAs": [
"https://doi.org/10.1007/978-3-030-92163-7_4",
"https://app.dimensions.ai/details/publication/pub.1143570240"
],
"sdDataset": "chapters",
"sdDatePublished": "2022-05-20T07:44",
"sdLicense": "https://scigraph.springernature.com/explorer/license/",
"sdPublisher": {
"name": "Springer Nature - SN SciGraph project",
"type": "Organization"
},
"sdSource": "s3://com-springernature-scigraph/baseset/20220519/entities/gbq_results/chapter/chapter_259.jsonl",
"type": "Chapter",
"url": "https://doi.org/10.1007/978-3-030-92163-7_4"
}
]
Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.
JSON-LD is a popular format for linked data which is fully compatible with JSON.
curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/978-3-030-92163-7_4'
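For instance, the JSON-LD record can be fetched and inspected programmatically. The snippet below is a minimal Python sketch (assuming the third-party requests package is installed); it issues the same content-negotiated request as the curl command above and reads a few fields from the chapter record shown earlier.

import requests

# Same endpoint and Accept header as the curl example above.
URL = "https://scigraph.springernature.com/pub.10.1007/978-3-030-92163-7_4"

resp = requests.get(URL, headers={"Accept": "application/ld+json"}, timeout=30)
resp.raise_for_status()

# The response is a JSON array holding a single chapter object
# (see the JSON-LD listing above).
record = resp.json()[0]

print(record["name"])           # chapter title
print(record["datePublished"])  # "2021"
for author in record["author"]:
    print(author["givenName"], author["familyName"])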
N-Triples is a line-based linked data format ideal for batch operations.
curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/978-3-030-92163-7_4'
Turtle is a human-readable linked data format.
curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/978-3-030-92163-7_4'
RDF/XML is a standard XML format for linked data.
curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/978-3-030-92163-7_4'
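Any of these serializations can also be loaded into an RDF graph for batch processing. The snippet below is a minimal Python sketch (assuming the third-party requests and rdflib packages are installed); it fetches the N-Triples serialization with the same Accept header as above, parses it into a graph, and tallies the triples by predicate.

from collections import Counter

import requests
from rdflib import Graph

URL = "https://scigraph.springernature.com/pub.10.1007/978-3-030-92163-7_4"

# Fetch the N-Triples serialization; the other serializations work the same way
# with the matching Accept header and rdflib format argument.
resp = requests.get(URL, headers={"Accept": "application/n-triples"}, timeout=30)
resp.raise_for_status()

g = Graph()
g.parse(data=resp.text, format="nt")

print(len(g), "triples")  # compare with the triple count reported below

counts = Counter(p for _, p, _ in g)
for predicate, n in counts.most_common():
    print(n, predicate)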
This table displays all metadata directly associated with this object as RDF triples.
112 TRIPLES
23 PREDICATES
64 URIs
57 LITERALS
7 BLANK NODES
# | Subject | Predicate | Object |
---|---|---|---|
1 | sg:pub.10.1007/978-3-030-92163-7_4 | schema:about | anzsrc-for:08 |
2 | ″ | ″ | anzsrc-for:0801 |
3 | ″ | schema:author | Nb4dbe7bddb73470b99c9ab2edf9a7d89 |
4 | ″ | schema:datePublished | 2021 |
5 | ″ | schema:datePublishedReg | 2021-01-01 |
6 | ″ | schema:description | This work proposes a double-layered cortical learning algorithm. The cortical learning algorithm is a time-series prediction methodology inspired from the human neuro-cortex. The human neuro-cortex has a multi-layer structure, while the conventional cortical learning algorithm has a single layer structure. This work introduces a double-layered structure into the cortical learning algorithm. The first layer represents the input data and its context every time-step. The input data context presentation in the first layer is transferred to the second layer, and it is represented in the second layer as an abstract representation. Also, the abstract prediction in the second layer is reflected to the first layer to modify and enhance the prediction in the first layer. The experimental results show that the proposed double-layered cortical learning algorithm achieves higher prediction accuracy than the conventional single-layered cortical learning algorithms and the recurrent neural networks with the long short-term memory on several artificial time-series data. |
7 | ″ | schema:editor | N513641ced32c49b79b361b7c7c34e769 |
8 | ″ | schema:genre | chapter |
9 | ″ | schema:inLanguage | en |
10 | ″ | schema:isAccessibleForFree | false |
11 | ″ | schema:isPartOf | N5738762a2bcd405d89219215ff3b402f |
12 | ″ | schema:keywords | Abstract Prediction |
13 | ″ | ″ | abstract representation |
14 | ″ | ″ | accuracy |
15 | ″ | ″ | algorithm |
16 | ″ | ″ | artificial time-series data |
17 | ″ | ″ | context |
18 | ″ | ″ | context presentation |
19 | ″ | ″ | conventional cortical learning algorithm |
20 | ″ | ″ | cortical learning algorithm |
21 | ″ | ″ | data |
22 | ″ | ″ | double-layered structure |
23 | ″ | ″ | experimental results |
24 | ″ | ″ | first layer |
25 | ″ | ″ | high prediction accuracy |
26 | ″ | ″ | input data |
27 | ″ | ″ | layer |
28 | ″ | ″ | layer structure |
29 | ″ | ″ | learning algorithm |
30 | ″ | ″ | long short-term memory |
31 | ″ | ″ | memory |
32 | ″ | ″ | methodology |
33 | ″ | ″ | multi-layer structure |
34 | ″ | ″ | network |
35 | ″ | ″ | neural network |
36 | ″ | ″ | prediction |
37 | ″ | ″ | prediction accuracy |
38 | ″ | ″ | prediction methodology |
39 | ″ | ″ | presentation |
40 | ″ | ″ | recurrent neural network |
41 | ″ | ″ | representation |
42 | ″ | ″ | results |
43 | ″ | ″ | second layer |
44 | ″ | ″ | short-term memory |
45 | ″ | ″ | single-layer structure |
46 | ″ | ″ | structure |
47 | ″ | ″ | time series data |
48 | ″ | ″ | time series prediction |
49 | ″ | ″ | work |
50 | ″ | schema:name | Double-Layered Cortical Learning Algorithm for Time-Series Prediction |
51 | ″ | schema:pagination | 33-44 |
52 | ″ | schema:productId | Ncd458d64ff014983b0ecf52f70b1a2fc |
53 | ″ | ″ | Nda581dc7b2a546eb8ae904c3592cee2d |
54 | ″ | schema:publisher | Nba1612c7723f403e9f19b4af57510f99 |
55 | ″ | schema:sameAs | https://app.dimensions.ai/details/publication/pub.1143570240 |
56 | ″ | ″ | https://doi.org/10.1007/978-3-030-92163-7_4 |
57 | ″ | schema:sdDatePublished | 2022-05-20T07:44 |
58 | ″ | schema:sdLicense | https://scigraph.springernature.com/explorer/license/ |
59 | ″ | schema:sdPublisher | N307567896b83423989f1ab3a10a91da2 |
60 | ″ | schema:url | https://doi.org/10.1007/978-3-030-92163-7_4 |
61 | ″ | sgo:license | sg:explorer/license/ |
62 | ″ | sgo:sdDataset | chapters |
63 | ″ | rdf:type | schema:Chapter |
64 | N307567896b83423989f1ab3a10a91da2 | schema:name | Springer Nature - SN SciGraph project |
65 | ″ | rdf:type | schema:Organization |
66 | N513641ced32c49b79b361b7c7c34e769 | rdf:first | Ncee6416deba94229bf9f1aae130f4bdf |
67 | ″ | rdf:rest | rdf:nil |
68 | N5738762a2bcd405d89219215ff3b402f | schema:isbn | 978-3-030-92162-0 |
69 | ″ | ″ | 978-3-030-92163-7 |
70 | ″ | schema:name | Bio-Inspired Information and Communications Technologies |
71 | ″ | rdf:type | schema:Book |
72 | Nb4dbe7bddb73470b99c9ab2edf9a7d89 | rdf:first | sg:person.015162262761.13 |
73 | ″ | rdf:rest | Ne84d8df5d30d44c7a096af2d3f213fce |
74 | Nba1612c7723f403e9f19b4af57510f99 | schema:name | Springer Nature |
75 | ″ | rdf:type | schema:Organisation |
76 | Ncd458d64ff014983b0ecf52f70b1a2fc | schema:name | dimensions_id |
77 | ″ | schema:value | pub.1143570240 |
78 | ″ | rdf:type | schema:PropertyValue |
79 | Ncee6416deba94229bf9f1aae130f4bdf | schema:familyName | Nakano |
80 | ″ | schema:givenName | Tadashi |
81 | ″ | rdf:type | schema:Person |
82 | Nda581dc7b2a546eb8ae904c3592cee2d | schema:name | doi |
83 | ″ | schema:value | 10.1007/978-3-030-92163-7_4 |
84 | ″ | rdf:type | schema:PropertyValue |
85 | Ne437f75448814d988422cec55feae76d | rdf:first | sg:person.07750750604.05 |
86 | ″ | rdf:rest | rdf:nil |
87 | Ne84d8df5d30d44c7a096af2d3f213fce | rdf:first | sg:person.012774267611.99 |
88 | ″ | rdf:rest | Ne437f75448814d988422cec55feae76d |
89 | anzsrc-for:08 | schema:inDefinedTermSet | anzsrc-for: |
90 | ″ | schema:name | Information and Computing Sciences |
91 | ″ | rdf:type | schema:DefinedTerm |
92 | anzsrc-for:0801 | schema:inDefinedTermSet | anzsrc-for: |
93 | ″ | schema:name | Artificial Intelligence and Image Processing |
94 | ″ | rdf:type | schema:DefinedTerm |
95 | sg:person.012774267611.99 | schema:affiliation | grid-institutes:grid.266298.1 |
96 | ″ | schema:familyName | Takadama |
97 | ″ | schema:givenName | Keiki |
98 | ″ | schema:sameAs | https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012774267611.99 |
99 | ″ | rdf:type | schema:Person |
100 | sg:person.015162262761.13 | schema:affiliation | grid-institutes:grid.266298.1 |
101 | ″ | schema:familyName | Aoki |
102 | ″ | schema:givenName | Takeru |
103 | ″ | schema:sameAs | https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015162262761.13 |
104 | ″ | rdf:type | schema:Person |
105 | sg:person.07750750604.05 | schema:affiliation | grid-institutes:grid.266298.1 |
106 | ″ | schema:familyName | Sato |
107 | ″ | schema:givenName | Hiroyuki |
108 | ″ | schema:sameAs | https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07750750604.05 |
109 | ″ | rdf:type | schema:Person |
110 | grid-institutes:grid.266298.1 | schema:alternateName | The University of Electro-Communications, 1-5-1 Chofugaoka, 182-8585, Chofu, Tokyo, Japan |
111 | ″ | schema:name | The University of Electro-Communications, 1-5-1 Chofugaoka, 182-8585, Chofu, Tokyo, Japan |
112 | ″ | rdf:type | schema:Organization |