Learning representations by back-propagating errors


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

1986-10

AUTHORS

David E. Rumelhart, Geoffrey E. Hinton, Ronald J. Williams

ABSTRACT

We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure¹.
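The procedure summarized in the abstract can be illustrated with a minimal modern sketch (not code from the paper): sigmoid units trained by gradient descent on the squared difference between actual and desired outputs. The XOR task, the layer sizes, the learning rate, and the iteration count below are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # input vectors
y = np.array([[0.], [1.], [1.], [0.]])                  # desired outputs (XOR)

W1 = rng.normal(0.0, 1.0, (2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(5000):
    # Forward pass: compute hidden activations and the actual output vector.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y
    losses.append(0.5 * float(np.sum(err ** 2)))

    # Backward pass: propagate the error derivative from the output layer
    # back through the hidden layer (sigmoid derivative is s * (1 - s)).
    d_out = err * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)

    # Weight adjustments proportional to the negative error gradient.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
```

After training, the hidden units have come to encode features of the task (here, the intermediate logical structure of XOR) that no single input carries on its own, which is the point the abstract makes about internal ‘hidden’ representations.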

PAGES

533-536

Journal

TITLE

Nature

ISSUE

6088

VOLUME

323

Related Patents

  • Computer Assisted Methods For Diagnosing Diseases
  • Classifying Data With Deep Learning Neural Records Incrementally Refined Through Expert Input
  • Systems And Apparatus For Implementing Task-Specific Learning Using Spiking Neurons
  • Method For Object Detection In Digital Image And Video Using Spiking Neural Networks
  • Self-Adjusting Threshold For Synaptic Activity In Neural Networks
  • Methods And Systems For Detecting Malicious Webpages
  • Deep Virtual Contrast
  • Predicting Likelihoods Of Conditions Being Satisfied Using Recurrent Neural Networks
  • Deep-Learning Based Functional Correlation Of Volumetric Designs
  • Salivary Biomarkers For Breast Cancer
  • Apparatus And Method For Partial Evaluation Of Synaptic Updates Based On System Events
  • Neural Network With Plural Weight Calculation Methods And Variation Of Plural Learning Parameters
  • Machine Learning Prediction Of Virtual Computing Instance Transfer Performance
  • System And Method For Multi-Architecture Computed Tomography Pipeline
  • Deep Adversarial Artifact Removal
  • Apparatus For Classifying Data Using Boost Pooling Neural Network, And Neural Network Training Method Therefor
  • Noise And Signal Management For Rpu Array
  • Computing System For Training Neural Networks
  • Analyzing Health Events Using Recurrent Neural Networks
  • Image Processing Arrangements
  • Method And Device For Detecting Defects Of At Least One Rotary Wing Aircraft Rotor
  • System And Method For Ai Controlling Waste-Water Treatment By Neural Network And Back-Propagation Algorithm
  • Computing Numeric Representations Of Words In A High-Dimensional Space
  • Voice Frequency Analysis System, Voice Frequency Analysis Method, And Voice Recognition System And Voice Recognition Method Using The Same
  • For Generating Control Signals To Govern Operation Of A Gas Turbine
  • Method For Neuromorphic Implementation Of Convolutional Neural Networks
  • Hierarchical Clustering Method And Apparatus For A Cognitive Recognition System Based On A Combination Of Temporal And Prefrontal Cortex Models
  • Learning Student Dnn Via Output Distribution
  • Salivary Biomarker For Cancer, Method And Device For Assaying Same, And Method For Determining Salivary Biomarker For Cancer
  • Generating Author Vectors
  • Method And Apparatus For Functional Magnetic Resonance Imaging
  • Identifying Or Measuring Selected Substances Or Toxins In A Subject Using Resonant Raman Signals
  • Modulated Stochasticity Spiking Neuron Network Controller Apparatus And Methods
  • Recognition And Judgement Apparatus Having Various Learning Functions
  • Method And Computer Differentiating Correlation Patterns In Functional Magnetic Resonance Imaging
  • System And Method For Task-Less Mapping Of Brain Activity
  • Stochastic Artificial Neuron With Multilayer Training Capability
  • Adaptive Critic Apparatus And Methods
  • Method For Determining Attributes Using Neural Network And Fuzzy Logic
  • Multi-Spectral Flame Detector With Radiant Energy Estimation
  • System And Method For Addressing Overfitting In A Neural Network
  • Apparatus And Methods For Gating Analog And Spiking Signals In Artificial Neural Networks
  • Method And Apparatus For Adjusting Read-Out Conditions And/Or Image Processing Conditions For Radiation Images, Radiation Image Read-Out Apparatus, And Radiation Image Analyzing Method And Apparatus
  • Degree-Of-Stain Judging Device And Degree-Of-Stain Judging Method
  • Report Formatting For Automated Or Assisted Analysis Of Medical Imaging Data And Medical Diagnosis
  • Ultrasonic Gas Leak Detector With False Alarm Discrimination
  • Optimized Recommendation Engine
  • Systems And Methods For Callable Options Values Determination Using Deep Machine Learning
  • Flame Detection System
  • Stochastic Apparatus And Methods For Implementing Generalized Learning Rules
  • Computerized Cluster Analysis Framework For Decorrelated Cluster Identification In Datasets
  • Method And Apparatus For Adjusting Read-Out Conditions And/Or Image
  • Image-Processing Method
  • Neural Network Processing System Using Semiconductor Memories
  • Autonomous And Continuously Self-Improving Learning System
  • Proportional-Integral-Derivative Controller Effecting Expansion Kernels Comprising A Plurality Of Spiking Neurons Associated With A Plurality Of Receptive Fields
  • Multi Modality Brain Mapping System (Mbms) Using Artificial Intelligence And Pattern Recognition
  • In A Computer
  • Number Of Clusters Estimation
  • Identifying Predictive Health Events In Temporal Sequences Using Recurrent Neural Network
  • Generating Representations Of Input Sequences Using Neural Networks
  • Monitoring, Simulation And Control Of Bioprocesses
  • Learning Pronunciations From Acoustic Sequences
  • Apparatus And Methods For Reinforcement-Guided Supervised Learning
  • Assessing Blood Brain Barrier Dynamics Or Identifying Or Measuring Selected Substances, Including Ethanol Or Toxins, In A Subject By Analyzing Raman Spectrum Signals
  • Apparatus And Methods For Generalized State-Dependent Learning In Spiking Neuron Networks
  • Mold Temperature Abnormality Indicator Detection Device And Storage Medium
  • Image Pattern Recognition Device And Recording Medium
  • Neural Network Processing System Using Semiconductor Memories And Processing Paired Data In Parallel
  • Face Identification Method And System Using Thereof
  • Extrapolating Empirical Models For Control, Prediction, And Optimization Applications
  • Semiconductor Integrated Circuit Device Comprising A Memory Array And A Processing Circuit
  • Generation Of High Dynamic Range Visual Media
  • Information Management Apparatus Dealing With Waste And Waste Recycle Planning Supporting Apparatus
  • Vehicle System For Identifying And Locating Non-Automobile Users Using Sounds
  • Methods And Systems For Data Traffic Analysis
  • Enhanced Fraud Detection With Terminal Transaction-Sequence Processing
  • Salivary Biomarkers For Oral Cancer
  • Information Processing Apparatus And Non-Transitory Computer Readable Medium Storing Information Processing Program For Estimating Image Represented By Captured Landscape
  • Composite Field Based Single Shot Prediction
  • Multi Optically-Coupled Channels Module And Related Methods Of Computation
  • Score Based Decisioning
  • Method For Training A Convolutional Recurrent Neural Network And For Semantic Segmentation Of Inputted Video Using The Trained Convolutional Recurrent Neural Network
  • Spiking Neuron Classifier Apparatus And Methods Using Conditionally Independent Subsets
  • Apparatus And Methods For State-Dependent Learning In Spiking Neuron Networks
  • Neural Net System For Analyzing Chromatographic Peaks
  • Systems And Methods For Learning And Predicting Events
  • Deep Neural Network Training With Native Devices
  • System And Method For Spectral Computed Tomography Using Single Polychromatic X-Ray Spectrum Acquisition
  • Feature-Preserving Noise Removal
  • Recurrent Neural Networks With Rectified Linear Units
  • Text Extraction, In Particular Table Extraction From Electronic Documents
  • Online Domain Adaptation For Multi-Object Tracking
  • Methods And Systems For Malware Detection
  • Spiking Neuron Network Adaptive Control Apparatus And Methods
  • System And Method For Detecting Fraudulent Transactions
  • Systems And Methods For Generating A Summary Of A Multi-Speaker Conversation
  • Information Recognition System And Control System Using Same
  • Pattern Recognition Neural Network
  • Grooming Instrument Configured To Monitor Hair Loss
  • Automated Loan Risk Assessment System And Method
  • Privacy-Aware In-Network Personalization System
  • Neural Network With Selective Error Reduction To Increase Learning Speed
  • Dynamically Reconfigurable Stochastic Learning Apparatus And Methods
  • Performance Of Artificial Neural Network Models In The Presence Of Instrumental Noise And Measurement Errors
  • Supervised Contrastive Learning With Multiple Positive Examples
  • Method And System For Learning Representations Of Network Flow Traffic
  • Single Layer Neural Network Circuit For Performing Linearly Separable And Non-Linearly Separable Logical Operations
  • Method For The Contactless Determination And Processing Of Sleep Movement Data
  • System And Method For Estimating Remaining Useful Life
  • Risk Determination And Management Using Predictive Modeling And Transaction Profiles For Individual Transacting Entities
  • Optical Information Processing Apparatus Having A Neural Network For Inducing An Error Signal
  • Data Processing Circuits In A Neural Network For Processing First Data Stored In Local Register Simultaneous With Second Data From A Memory
  • System And Method For Efficient Evolution Of Deep Convolutional Neural Networks Using Filter-Wise Recombination And Propagated Mutations
  • Information Recognition System And Control System Using Same
Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1038/323533a0

    DOI

    http://dx.doi.org/10.1038/323533a0

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1018367015


    Indexing Status: Check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
    Incoming Citations: Browse incoming citations for this publication using opencitations.net.

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Institute for Cognitive Science, C-015, University of California, 92093, San Diego, La Jolla, California, USA", 
              "id": "http://www.grid.ac/institutes/None", 
              "name": [
                "Institute for Cognitive Science, C-015, University of California, 92093, San Diego, La Jolla, California, USA"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Rumelhart", 
            "givenName": "David E.", 
            "id": "sg:person.011313517665.78", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011313517665.78"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Department of Computer Science, Carnegie-Mellon University, 15213, Pittsburgh, Philadelphia, USA", 
              "id": "http://www.grid.ac/institutes/grid.147455.6", 
              "name": [
                "Department of Computer Science, Carnegie-Mellon University, 15213, Pittsburgh, Philadelphia, USA"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Hinton", 
            "givenName": "Geoffrey E.", 
            "id": "sg:person.0615147542.17", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0615147542.17"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Institute for Cognitive Science, C-015, University of California, 92093, San Diego, La Jolla, California, USA", 
              "id": "http://www.grid.ac/institutes/None", 
              "name": [
                "Institute for Cognitive Science, C-015, University of California, 92093, San Diego, La Jolla, California, USA"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Williams", 
            "givenName": "Ronald J.", 
            "id": "sg:person.016024123731.46", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016024123731.46"
            ], 
            "type": "Person"
          }
        ], 
        "datePublished": "1986-10", 
        "datePublishedReg": "1986-10-01", 
        "description": "We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal \u2018hidden\u2019 units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure1.", 
        "genre": "article", 
        "id": "sg:pub.10.1038/323533a0", 
        "isAccessibleForFree": false, 
        "isPartOf": [
          {
            "id": "sg:journal.1018957", 
            "issn": [
              "0028-0836", 
              "1476-4687"
            ], 
            "name": "Nature", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "6088", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "323"
          }
        ], 
        "keywords": [
          "output vector", 
          "weight adjustment", 
          "useful new features", 
          "learning procedure", 
          "new features", 
          "important features", 
          "vector", 
          "simple method", 
          "network", 
          "regularity", 
          "error", 
          "representation", 
          "input", 
          "procedure", 
          "output", 
          "connection", 
          "features", 
          "nets", 
          "domain", 
          "results", 
          "units", 
          "interaction", 
          "task", 
          "new learning procedure", 
          "measures", 
          "part", 
          "weight", 
          "adjustment", 
          "task domain", 
          "ability", 
          "procedure1", 
          "differences", 
          "method"
        ], 
        "name": "Learning representations by back-propagating errors", 
        "pagination": "533-536", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1018367015"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1038/323533a0"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1038/323533a0", 
          "https://app.dimensions.ai/details/publication/pub.1018367015"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-09-02T15:46", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20220902/entities/gbq_results/article/article_211.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1038/323533a0"
      }
    ]
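Because JSON-LD is plain JSON, the record above can be consumed with ordinary JSON tooling. The sketch below (our illustration, not part of the SciGraph page) parses an abridged copy of the record and assembles a one-line citation; in practice the full record would be fetched from the SciGraph URI with an `Accept: application/ld+json` header.

```python
import json

# Abridged copy of the JSON-LD record above; most fields are omitted for
# brevity, and the field names follow schema.org as used by SciGraph.
record = json.loads("""
[{
  "name": "Learning representations by back-propagating errors",
  "datePublished": "1986-10",
  "author": [
    {"givenName": "David E.",    "familyName": "Rumelhart", "type": "Person"},
    {"givenName": "Geoffrey E.", "familyName": "Hinton",    "type": "Person"},
    {"givenName": "Ronald J.",   "familyName": "Williams",  "type": "Person"}
  ],
  "isPartOf": [{"name": "Nature", "type": "Periodical"}],
  "pagination": "533-536",
  "type": "ScholarlyArticle"
}]
""")

pub = record[0]
authors = ", ".join(f"{a['givenName']} {a['familyName']}" for a in pub["author"])
journal = next(p["name"] for p in pub["isPartOf"] if "name" in p)
citation = f"{authors}. {pub['name']}. {journal} {pub['pagination']} ({pub['datePublished']})."
print(citation)
```

Note that `isPartOf` is a list mixing the periodical, issue, and volume entries, so the journal name has to be picked out by key rather than by position.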
     

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data that is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1038/323533a0'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1038/323533a0'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1038/323533a0'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1038/323533a0'
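    The same content negotiation can be sketched with Python's standard library (our illustrative equivalent of the curl commands above, not an official SciGraph client). Only the request object is constructed here; `urlopen(req)` would perform the actual fetch.

```python
from urllib.request import Request

URI = "https://scigraph.springernature.com/pub.10.1038/323533a0"

# Accept-header values for the four RDF serializations listed above.
FORMATS = {
    "json-ld":   "application/ld+json",
    "n-triples": "application/n-triples",
    "turtle":    "text/turtle",
    "rdf-xml":   "application/rdf+xml",
}

def rdf_request(fmt: str) -> Request:
    """Build a GET request asking for the record in the given RDF format."""
    return Request(URI, headers={"Accept": FORMATS[fmt]})

req = rdf_request("turtle")
print(req.get_header("Accept"))  # text/turtle
```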


     

    This table displays all metadata directly associated with this object as RDF triples.

    107 TRIPLES      20 PREDICATES      58 URIs      50 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1038/323533a0 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author N97430a6a6ed3484da1d32a9b08bde0de
    4 schema:datePublished 1986-10
    5 schema:datePublishedReg 1986-10-01
    6 schema:description We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure1.
    7 schema:genre article
    8 schema:isAccessibleForFree false
    9 schema:isPartOf N11d1f9bf41de4acca596e47e4e976df8
    10 N451a28e029e04f058abc6735ee31e0cf
    11 sg:journal.1018957
    12 schema:keywords ability
    13 adjustment
    14 connection
    15 differences
    16 domain
    17 error
    18 features
    19 important features
    20 input
    21 interaction
    22 learning procedure
    23 measures
    24 method
    25 nets
    26 network
    27 new features
    28 new learning procedure
    29 output
    30 output vector
    31 part
    32 procedure
    33 procedure1
    34 regularity
    35 representation
    36 results
    37 simple method
    38 task
    39 task domain
    40 units
    41 useful new features
    42 vector
    43 weight
    44 weight adjustment
    45 schema:name Learning representations by back-propagating errors
    46 schema:pagination 533-536
    47 schema:productId N6ce67cc5e249423fb0391bb08eed284e
    48 Nea3a70b5646340beab9f344deb612024
    49 schema:sameAs https://app.dimensions.ai/details/publication/pub.1018367015
    50 https://doi.org/10.1038/323533a0
    51 schema:sdDatePublished 2022-09-02T15:46
    52 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    53 schema:sdPublisher N6348f8539bf8453c8aab3aea1d031de0
    54 schema:url https://doi.org/10.1038/323533a0
    55 sgo:license sg:explorer/license/
    56 sgo:sdDataset articles
    57 rdf:type schema:ScholarlyArticle
    58 N11d1f9bf41de4acca596e47e4e976df8 schema:issueNumber 6088
    59 rdf:type schema:PublicationIssue
    60 N451a28e029e04f058abc6735ee31e0cf schema:volumeNumber 323
    61 rdf:type schema:PublicationVolume
    62 N6348f8539bf8453c8aab3aea1d031de0 schema:name Springer Nature - SN SciGraph project
    63 rdf:type schema:Organization
    64 N6ce67cc5e249423fb0391bb08eed284e schema:name dimensions_id
    65 schema:value pub.1018367015
    66 rdf:type schema:PropertyValue
    67 N75f53127201f461590710974224c1baa rdf:first sg:person.0615147542.17
    68 rdf:rest N7908d9493a0448cfa66a62dec177e573
    69 N7908d9493a0448cfa66a62dec177e573 rdf:first sg:person.016024123731.46
    70 rdf:rest rdf:nil
    71 N97430a6a6ed3484da1d32a9b08bde0de rdf:first sg:person.011313517665.78
    72 rdf:rest N75f53127201f461590710974224c1baa
    73 Nea3a70b5646340beab9f344deb612024 schema:name doi
    74 schema:value 10.1038/323533a0
    75 rdf:type schema:PropertyValue
    76 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    77 schema:name Information and Computing Sciences
    78 rdf:type schema:DefinedTerm
    79 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    80 schema:name Artificial Intelligence and Image Processing
    81 rdf:type schema:DefinedTerm
    82 sg:journal.1018957 schema:issn 0028-0836
    83 1476-4687
    84 schema:name Nature
    85 schema:publisher Springer Nature
    86 rdf:type schema:Periodical
    87 sg:person.011313517665.78 schema:affiliation grid-institutes:None
    88 schema:familyName Rumelhart
    89 schema:givenName David E.
    90 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011313517665.78
    91 rdf:type schema:Person
    92 sg:person.016024123731.46 schema:affiliation grid-institutes:None
    93 schema:familyName Williams
    94 schema:givenName Ronald J.
    95 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016024123731.46
    96 rdf:type schema:Person
    97 sg:person.0615147542.17 schema:affiliation grid-institutes:grid.147455.6
    98 schema:familyName Hinton
    99 schema:givenName Geoffrey E.
    100 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0615147542.17
    101 rdf:type schema:Person
    102 grid-institutes:None schema:alternateName Institute for Cognitive Science, C-015, University of California, 92093, San Diego, La Jolla, California, USA
    103 schema:name Institute for Cognitive Science, C-015, University of California, 92093, San Diego, La Jolla, California, USA
    104 rdf:type schema:Organization
    105 grid-institutes:grid.147455.6 schema:alternateName Department of Computer Science, Carnegie-Mellon University, 15213, Pittsburgh, Pennsylvania, USA
    106 schema:name Department of Computer Science, Carnegie-Mellon University, 15213, Pittsburgh, Pennsylvania, USA
    107 rdf:type schema:Organization
     





