PUBLICATION DATE

2016-05-07

AUTHORS

Xiong Luo, Changwei Jiang, Xiaona Yang, Xiaojuan Ban

TITLE

Timeliness online regularized extreme learning machine

ISSUE

N/A

VOLUME

N/A

ISSN (print)

1868-8071

ISSN (electronic)

1868-808X

ABSTRACT

To improve learning performance, a novel online sequential extreme learning machine (ELM) algorithm for single-hidden-layer feedforward networks is proposed with a regularization mechanism in a unified framework. The proposed algorithm is called the timeliness online regularized extreme learning machine (TORELM). Like the timeliness managing extreme learning machine, which improves the online sequential extreme learning machine by incorporating a timeliness management scheme into the ELM approach for incremental training samples, TORELM analyzes the training data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size under a similar framework. Meanwhile, the newly arriving training data can be given priority over the historical data by maximizing the contribution of the newly added training data, since in some cases it is more reasonable for the incremental data to carry the weights that represent the current system state, in accordance with practical analysis. Furthermore, considering the imbalance between empirical risk and structural risk in some traditional learning methods, we add a regularization technique to the timeliness scheme of TORELM, using a weight factor to balance the two and achieve better generalization performance. Hence, TORELM offers higher generalization capability, in most cases with a small testing error, while performing online sequential learning. In addition, the algorithm remains competitive in training time compared with other schemes. Finally, simulation results show that TORELM achieves higher learning accuracy and better stability than other ELM-based machine learning methods.
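
The abstract describes an online, chunk-by-chunk ELM with ridge-style regularization. As a rough point of reference only, the sketch below implements a generic online regularized ELM update in NumPy; it is an assumption-laden outline (random sigmoid hidden layer, RLS-style recursive update, illustrative regularization weight lam), not the authors' TORELM, and it omits the timeliness weighting of newly arriving data described above.

import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_inputs, n_outputs = 50, 5, 1
lam = 1e-2  # regularization weight (illustrative value)

# Random hidden layer, fixed after initialization (standard ELM).
W = rng.standard_normal((n_inputs, n_hidden))
b = rng.standard_normal(n_hidden)

def hidden(X):
    # Sigmoid activations of the random hidden layer.
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def init(X0, T0):
    # Regularized batch solution on the first chunk: beta = (H'H + lam*I)^-1 H'T.
    H = hidden(X0)
    P = np.linalg.inv(H.T @ H + lam * np.eye(n_hidden))
    return P, P @ H.T @ T0

def update(P, beta, Xk, Tk):
    # Recursive (RLS-style) update for each new chunk, as in online sequential ELM.
    H = hidden(Xk)
    K = P @ H.T @ np.linalg.inv(np.eye(len(Xk)) + H @ P @ H.T)
    P = P - K @ H @ P
    beta = beta + K @ (Tk - H @ beta)
    return P, beta

# Usage: stream chunks of a toy regression problem.
X = rng.standard_normal((200, n_inputs))
T = np.sin(X.sum(axis=1, keepdims=True))
P, beta = init(X[:50], T[:50])
for start in range(50, 200, 25):
    P, beta = update(P, beta, X[start:start + 25], T[start:start + 25])
print("train MSE:", float(np.mean((hidden(X) @ beta - T) ** 2)))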



31 TRIPLES      26 PREDICATES      32 URIs      17 LITERALS

Subject: articles:62a5ffb0c413f832f1a9f1de1a5c628a

Predicate                        Object
sg:abstract                      (abstract text as given above)
sg:articleType                   OriginalPaper
sg:ddsId                         s13042-016-0544-9
sg:ddsIdJournalBrand             13042
sg:doi                           10.1007/s13042-016-0544-9
sg:doiLink                       http://dx.doi.org/10.1007/s13042-016-0544-9
sg:hasContributingOrganization   grid-institutes:grid.69775.3a
sg:hasContribution               contributions:0d49f362fa0cea76ebd96043d46c4b16
                                 contributions:7aa6b49b410a53ba89cc1b5fa57ff9e5
                                 contributions:dc03a38b82c06d075ce4552d01abca51
                                 contributions:ec9fb3f74b3f248e38875388e00d9490
sg:hasFieldOfResearchCode        anzsrc-for:08
                                 anzsrc-for:0801
sg:hasJournal                    journals:23c9613e0e7684c8449242d03cd7b542
                                 journals:b9dd939e16e7e8bf79fbe4aa31bfe64a
sg:hasJournalBrand               journal-brands:385d6a08ab5a795dde4162f583d1f0a9
sg:indexingDatabase              Web of Science
sg:issnElectronic                1868-808X
sg:issnPrint                     1868-8071
sg:language                      English
sg:license                       http://scigraph.springernature.com/explorer/license/
sg:pageEnd                       12
sg:pageStart                     1
sg:publicationDate               2016-05-07
sg:publicationYear               2016
sg:publicationYearMonth          2016-05
sg:scigraphId                    62a5ffb0c413f832f1a9f1de1a5c628a
sg:title                         Timeliness online regularized extreme learning machine
sg:webpage                       https://link.springer.com/10.1007/s13042-016-0544-9
rdf:type                         sg:Article
rdfs:label                       Article: Timeliness online regularized extreme learning machine

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular JSON format for linked data.

curl -H 'Accept: application/ld+json' 'http://scigraph.springernature.com/things/articles/62a5ffb0c413f832f1a9f1de1a5c628a'
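
The same request can be made from a script. A minimal sketch in Python using the requests library (the endpoint is the one shown above; the key names inside the payload are assumptions and should be checked against the actual response):

import requests

URL = "http://scigraph.springernature.com/things/articles/62a5ffb0c413f832f1a9f1de1a5c628a"

# Ask the server for JSON-LD via HTTP content negotiation.
resp = requests.get(URL, headers={"Accept": "application/ld+json"})
resp.raise_for_status()

doc = resp.json()  # JSON-LD is plain JSON, so the standard parser is enough
print(doc.get("sg:title", doc))  # "sg:title" is an assumed key; inspect the payload first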

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'http://scigraph.springernature.com/things/articles/62a5ffb0c413f832f1a9f1de1a5c628a'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'http://scigraph.springernature.com/things/articles/62a5ffb0c413f832f1a9f1de1a5c628a'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'http://scigraph.springernature.com/things/articles/62a5ffb0c413f832f1a9f1de1a5c628a'
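
Any of these serializations can also be loaded into an RDF library to query the triples listed above. A minimal sketch using Python's rdflib (a tooling assumption, not part of the SciGraph documentation):

import requests
import rdflib

URL = "http://scigraph.springernature.com/things/articles/62a5ffb0c413f832f1a9f1de1a5c628a"

# Fetch the Turtle serialization explicitly, then parse it into an in-memory graph.
ttl = requests.get(URL, headers={"Accept": "text/turtle"}).text
g = rdflib.Graph()
g.parse(data=ttl, format="turtle")

# Print every predicate/object pair attached to the article resource.
for s, p, o in g:
    print(p, "->", o)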





