Learning representations by back-propagating errors


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

1986-10

AUTHORS

David E. Rumelhart, Geoffrey E. Hinton, Ronald J. Williams

ABSTRACT

We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure1.
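The procedure the abstract describes can be sketched in a few lines of plain Python: a tiny 2-2-1 network of sigmoid units trained by gradient descent on XOR, a task a perceptron cannot solve without hidden units. The layer sizes, learning rate, and epoch count below are illustrative choices, not values from the paper.

```python
# Minimal back-propagation sketch: a 2-input, 2-hidden-unit, 1-output network
# of sigmoid units, trained on XOR by repeated weight adjustment.
# All hyperparameters (lr, epochs, seed) are illustrative, not from the paper.
import math
import random


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def make_net(seed=0):
    # w_h: two hidden units, each with two input weights + a bias.
    # w_o: one output unit with two hidden weights + a bias.
    rng = random.Random(seed)
    w_h = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    w_o = [rng.uniform(-1, 1) for _ in range(3)]
    return w_h, w_o


def forward(w_h, w_o, x):
    # Forward pass: hidden activations, then the actual output.
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, y


def total_error(w_h, w_o, data):
    # The measure being minimized: squared difference between actual
    # and desired output, summed over the training cases.
    return sum((forward(w_h, w_o, x)[1] - t) ** 2 for x, t in data)


def train(w_h, w_o, data, epochs=5000, lr=0.5):
    for _ in range(epochs):
        for x, t in data:
            h, y = forward(w_h, w_o, x)
            # Backward pass: error derivative at the output unit,
            # then propagated back to the hidden units.
            d_y = (y - t) * y * (1 - y)
            d_h = [d_y * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
            # Weight adjustments (gradient descent).
            for j in range(2):
                w_o[j] -= lr * d_y * h[j]
                for k in range(2):
                    w_h[j][k] -= lr * d_h[j] * x[k]
                w_h[j][2] -= lr * d_h[j]
            w_o[2] -= lr * d_y


XOR = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
w_h, w_o = make_net()
err_before = total_error(w_h, w_o, XOR)
train(w_h, w_o, XOR)
err_after = total_error(w_h, w_o, XOR)
```

After training, the hidden units have come to encode intermediate features of the XOR mapping that no single input carries on its own, which is the point the abstract makes about internal 'hidden' representations.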

PAGES

533-536

Journal

TITLE

Nature

ISSUE

6088

VOLUME

323

Related Patents

  • Methods And Systems For Detecting Malicious Webpages
  • Deep-Learning Based Functional Correlation Of Volumetric Designs
  • Computer Assisted Methods For Diagnosing Diseases
  • Identifying Or Measuring Selected Substances Or Toxins In A Subject Using Resonant Raman Signals
  • Method For Object Detection In Digital Image And Video Using Spiking Neural Networks
  • Modulated Stochasticity Spiking Neuron Network Controller Apparatus And Methods
  • Apparatus And Method For Partial Evaluation Of Synaptic Updates Based On System Events
  • Salivary Biomarkers For Breast Cancer
  • Classifying Data With Deep Learning Neural Records Incrementally Refined Through Expert Input
  • Systems And Apparatus For Implementing Task-Specific Learning Using Spiking Neurons
  • Predicting Likelihoods Of Conditions Being Satisfied Using Recurrent Neural Networks
  • Method And Apparatus For Functional Magnetic Resonance Imaging
  • System And Method For Task-Less Mapping Of Brain Activity
  • Self-Adjusting Threshold For Synaptic Activity In Neural Networks
  • Analyzing Health Events Using Recurrent Neural Networks
  • Salivary Biomarker For Cancer, Method And Device For Assaying Same, And Method For Determining Salivary Biomarker For Cancer
  • Method For Neuromorphic Implementation Of Convolutional Neural Networks
  • Generating Author Vectors
  • System And Method For Ai Controlling Waste-Water Treatment By Neural Network And Back-Propagation Algorithm
  • Predicting Likelihoods Of Conditions Being Satisfied Using Recurrent Neural Networks
  • Hierarchical Clustering Method And Apparatus For A Cognitive Recognition System Based On A Combination Of Temporal And Prefrontal Cortex Models
  • Noise And Signal Management For Rpu Array
  • Method And Device For Detecting Defects Of At Least One Rotary Wing Aircraft Rotor
  • System And Method For Multi-Architecture Computed Tomography Pipeline
  • Neutral Network With Plural Weight Calculation Methods And Variation Of Plural Learning Parameters
  • Learning Student Dnn Via Output Distribution
  • Computing Numeric Representations Of Words In A High-Dimensional Space
  • For Generating Control Signals To Govern Operation Of A Gas Turbine
  • Machine Learning Prediction Of Virtual Computing Instance Transfer Performance
  • Recognition And Judgement Apparatus Having Various Learning Functions
  • Method And Computer Differentiating Correlation Patterns In Functional Magnetic Resonance Imaging
  • Voice Frequency Analysis System, Voice Frequency Analysis Method, And Voice Recognition System And Voice Recognition Method Using The Same
  • Image Processing Arrangements
  • Computing System For Training Neural Networks
  • Deep Adversarial Artifact Removal
  • Apparatus For Classifying Data Using Boost Pooling Neural Network, And Neural Network Training Method Therefor
  • Deep Virtual Contrast
  • Stochastic Artifical Neuron With Multilayer Training Capability
  • Predicting Likelihoods Of Conditions Being Satisfied Using Recurrent Neural Networks
  • Adaptive Critic Apparatus And Methods
  • Method For Determining Attributes Using Neural Network And Fuzzy Logic
  • Multi-Spectral Flame Detector With Radiant Energy Estimation
  • System And Method For Addressing Overfitting In A Neural Network
  • Apparatus And Methods For Gating Analog And Spiking Signals In Artificial Neural Networks
  • Method And Apparatus For Adjusting Read-Out Conditions And/Or Image Processing Conditions For Radiation Images, Radiation Image Read-Out Apparatus, And Radiation Image Analyzing Method And Apparatus
  • Degree-Of-Stain Judging Device And Degree-Of-Stain Judging Method
  • Report Formatting For Automated Or Assisted Analysis Of Medical Imaging Data And Medical Diagnosis
  • Deep Virtual Contrast
  • Ultrasonic Gas Leak Detector With False Alarm Discrimination
  • Optimized Recommendation Engine
  • Systems And Methods For Callable Options Values Determination Using Deep Machine Learning
  • Flame Detection System
  • System And Method For Addressing Overfitting In A Neural Network
  • Ultrasonic Gas Leak Detector With False Alarm Discrimination
  • Stochastic Apparatus And Methods For Implementing Generalized Learning Rules
  • Computerized Cluster Analysis Framework For Decorrelated Cluster Identification In Datasets
  • Method And Apparatus For Adjusting Read-Out Conditions And/Or Image
  • Image-Processing Method
  • Neural Network Processing System Using Semiconductor Memories
  • Autonomous And Continuously Self-Improving Learning System
  • Proportional-Integral-Derivative Controller Effecting Expansion Kernels Comprising A Plurality Of Spiking Neurons Associated With A Plurality Of Receptive Fields
  • Multi Modality Brain Mapping System (Mbms) Using Artificial Intelligence And Pattern Recognition
  • In A Computer
  • Deep-Learning Based Functional Correlation Of Volumetric Designs
  • Number Of Clusters Estimation
  • Identifying Predictive Health Events In Temporal Sequences Using Recurrent Neural Network
  • Generating Representations Of Input Sequences Using Neural Networks
  • Monitoring, Simulation And Control Of Bioprocesses
  • Learning Pronunciations From Acoustic Sequences
  • Apparatus And Methods For Reinforcement-Guided Supervised Learning
  • Assessing Blood Brain Barrier Dynamics Or Identifying Or Measuring Selected Substances, Including Ethanol Or Toxins, In A Subject By Analyzing Raman Spectrum Signals
  • System And Method For Task-Less Mapping Of Brain Activity
  • Apparatus And Methods For Generalized State-Dependent Learning In Spiking Neuron Networks
  • Mold Temperature Abnormality Indicator Detection Device And Storage Medium
  • Neural Network Processing System Using Semiconductor Memories
  • Image Pattern Recognition Device And Recording Medium
  • Neural Network Processing System Using Semiconductor Memories And Processing Paired Data In Parallel
  • Face Identification Method And System Using Thereof
  • Extrapolating Empirical Models For Control, Prediction, And Optimization Applications
  • Degree-Of-Stain Judging Device And Degree-Of-Stain Judging Method
  • Computing Numeric Representations Of Words In A High-Dimensional Space
  • Semiconductor Integrated Circuit Device Comprising A Memory Array And A Processing Circuit
  • Generation Of High Dynamic Range Visual Media
  • Information Management Apparatus Dealing With Waste And Waste Recycle Planning Supporting Apparatus
  • Vehicle System For Identifying And Locating Non-Automobile Users Using Sounds
  • Methods And Systems For Data Traffic Analysis
  • Enhanced Fraud Detection With Terminal Transaction-Sequence Processing
  • Salivary Biomarkers For Oral Cancer
  • Information Processing Apparatus And Non-Transitory Computer Readable Medium Storing Information Processing Program For Estimating Image Represented By Captured Landscape
  • Composite Field Based Single Shot Prediction
  • Multi Optically-Coupled Channels Module And Related Methods Of Computation
  • Score Based Decisioning
  • Method For Training A Convolutional Recurrent Neural Network And For Semantic Segmentation Of Inputted Video Using The Trained Convolutional Recurrent Neural Network
  • Generating Representations Of Input Sequences Using Neural Networks
  • Neural Net System For Analyzing Chromatographic Peaks
  • Nerve Equivalent Circuit, Synapse Equivalent Circuit And Nerve Cell Body Equivalent Circuit
  • Data Processing Device With An Artificial Neural Network And Method For Data Processing
  • Method And Apparatus For Indetification, Forecast, And Control Of A Non-Linear Flow On A Physical System Network Using A Neural Network
  • Spiking Neuron Classifier Apparatus And Methods Using Conditionally Independent Subsets
  • Quantum-Assisted Training Of Neural Networks
  • Computing Numeric Representations Of Words In A High-Dimensional Space
  • Quantum-Assisted Training Of Neural Networks
  • Data Processing Device With An Artificial Neural Network And Method For Data Processing
  • Method And Apparatus For Indetification, Forecast, And Control Of A Non-Linear Flow On A Physical System Network Using A Neural Network
  • Nerve Equivalent Circuit, Synapse Equivalent Circuit And Nerve Cell Body Equivalent Circuit
  • System And Method For Spectral Computed Tomography Using Single Polychromatic X-Ray Spectrum Acquisition
  • Text Extraction, In Particular Table Extraction From Electronic Documents
  • Methods And Systems For Malware Detection
  • Spiking Neuron Network Adaptive Control Apparatus And Methods
  • Feature-Preserving Noise Removal
  • Generating Representations Of Input Sequences Using Neural Networks
  • Recurrent Neural Networks With Rectified Linear Units
  • Online Domain Adaptation For Multi-Object Tracking
  • Method For The Contactless Determination And Processing Of Sleep Movement Data
  • Methods And Systems For Malware Detection
  • System And Method For Task-Less Mapping Of Brain Activity
  • System And Method For Estimating Remaining Useful Life
  • Image-Processing Method
  • Risk Determination And Management Using Predictive Modeling And Transaction Profiles For Individual Transacting Entities
  • Neural Network Processing System Using Semiconductor Memories
  • Method And System For Learning Representations Of Network Flow Traffic
  • System And Method For Efficient Evolution Of Deep Convolutional Neural Networks Using Filter-Wise Recombination And Propagated Mutations
  • Data Processing Circuits In A Neural Network For Processing First Data Stored In Local Register Simultaneous With Second Data From A Memory
  • Optical Information Processing Apparatus Having A Neural Network For Inducing An Error Signal
  • Information Recognition System And Control System Using Same
  • Neural Network With Selective Error Reduction To Increase Learning Speed
  • Pattern Recognition Neural Network
  • Generating Author Vectors
  • Information Recognition System And Control System Using Same
  • Single Layer Neural Network Circuit For Performing Linearly Separable And Non-Linearly Separable Logical Operations
  • Generating Author Vectors
  • Supervised Contrastive Learning With Multiple Positive Examples
  • Privacy-Aware In-Network Personalization System
  • Systems And Methods For Generating A Summary Of A Multi-Speaker Conversation
  • Computer Assisted Methods For Diagnosing Diseases
  • Performance Of Artificial Neural Network Models In The Presence Of Instrumental Noise And Measurement Errors
  • Automated Loan Risk Assessment System And Method
  • Degree-Of-Stain Judging Device And Degree-Of-Stain Judging Method
  • Grooming Instrument Configured To Monitor Hair Loss
  • System And Method For Addressing Overfitting In A Neural Network
  • System And Method For Detecting Fraudulent Transactions
  • Automated Loan Risk Assessment System And Method
  • Dynamically Reconfigurable Stochastic Learning Apparatus And Methods
  • Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1038/323533a0

    DOI

    http://dx.doi.org/10.1038/323533a0

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1018367015



    JSON-LD is the canonical representation for SciGraph data.


    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Institute for Cognitive Science, C-015, University of California, 92093, San Diego, La Jolla, California, USA", 
              "id": "http://www.grid.ac/institutes/None", 
              "name": [
                "Institute for Cognitive Science, C-015, University of California, 92093, San Diego, La Jolla, California, USA"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Rumelhart", 
            "givenName": "David E.", 
            "id": "sg:person.011313517665.78", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011313517665.78"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Department of Computer Science, Carnegie-Mellon University, 15213, Pittsburgh, Pennsylvania, USA", 
              "id": "http://www.grid.ac/institutes/grid.147455.6", 
              "name": [
                "Department of Computer Science, Carnegie-Mellon University, 15213, Pittsburgh, Pennsylvania, USA"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Hinton", 
            "givenName": "Geoffrey E.", 
            "id": "sg:person.0615147542.17", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0615147542.17"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Institute for Cognitive Science, C-015, University of California, 92093, San Diego, La Jolla, California, USA", 
              "id": "http://www.grid.ac/institutes/None", 
              "name": [
                "Institute for Cognitive Science, C-015, University of California, 92093, San Diego, La Jolla, California, USA"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Williams", 
            "givenName": "Ronald J.", 
            "id": "sg:person.016024123731.46", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016024123731.46"
            ], 
            "type": "Person"
          }
        ], 
        "datePublished": "1986-10", 
        "datePublishedReg": "1986-10-01", 
        "description": "We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal \u2018hidden\u2019 units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure1.", 
        "genre": "article", 
        "id": "sg:pub.10.1038/323533a0", 
        "isAccessibleForFree": false, 
        "isPartOf": [
          {
            "id": "sg:journal.1018957", 
            "issn": [
              "0028-0836", 
              "1476-4687"
            ], 
            "name": "Nature", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "6088", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "323"
          }
        ], 
        "keywords": [
          "output vector", 
          "weight adjustment", 
          "useful new features", 
          "learning procedure", 
          "new features", 
          "important features", 
          "vector", 
          "simple method", 
          "network", 
          "regularity", 
          "error", 
          "representation", 
          "input", 
          "procedure", 
          "output", 
          "connection", 
          "features", 
          "nets", 
          "domain", 
          "results", 
          "units", 
          "interaction", 
          "task", 
          "new learning procedure", 
          "measures", 
          "part", 
          "weight", 
          "adjustment", 
          "task domain", 
          "ability", 
          "procedure1", 
          "differences", 
          "method"
        ], 
        "name": "Learning representations by back-propagating errors", 
        "pagination": "533-536", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1018367015"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1038/323533a0"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1038/323533a0", 
          "https://app.dimensions.ai/details/publication/pub.1018367015"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-09-02T15:46", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20220902/entities/gbq_results/article/article_211.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1038/323533a0"
      }
    ]
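Because the record above is plain JSON-LD, its fields can be read with nothing but the standard library. The sketch below parses a trimmed copy of the record shown (title, date, authors, DOI link); the field names follow schema.org as used by SciGraph, and the trimming is purely for illustration.

```python
# Extract a citation line from a trimmed copy of the SciGraph JSON-LD record
# shown above. Only a subset of fields is reproduced here for illustration.
import json

record_json = '''
[{"name": "Learning representations by back-propagating errors",
  "datePublished": "1986-10",
  "author": [{"familyName": "Rumelhart", "givenName": "David E."},
             {"familyName": "Hinton", "givenName": "Geoffrey E."},
             {"familyName": "Williams", "givenName": "Ronald J."}],
  "sameAs": ["https://doi.org/10.1038/323533a0"]}]
'''

# JSON-LD documents are valid JSON, so json.loads is enough for flat access.
record = json.loads(record_json)[0]
authors = ", ".join(
    f'{a["givenName"]} {a["familyName"]}' for a in record["author"]
)
citation = f'{authors}. {record["name"]}. {record["sameAs"][0]}'
print(citation)
```

For anything beyond flat field access (expanding the `@context`, resolving IRIs), a dedicated JSON-LD processing library would be the better tool; plain `json` suffices for pulling out fields as shown here.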
     


    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1038/323533a0'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1038/323533a0'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1038/323533a0'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1038/323533a0'


     

    This table displays all metadata directly associated with this object as RDF triples.

    107 TRIPLES      20 PREDICATES      58 URIs      50 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1038/323533a0 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author Nfe3b11ae83c144c3898cbb801ffb5c8f
    4 schema:datePublished 1986-10
    5 schema:datePublishedReg 1986-10-01
    6 schema:description We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure1.
    7 schema:genre article
    8 schema:isAccessibleForFree false
    9 schema:isPartOf N40b979f0cb8a4e86a2edc2d767076b68
    10 Nd4238d1165014c3c90e48ee401c255c6
    11 sg:journal.1018957
    12 schema:keywords ability
    13 adjustment
    14 connection
    15 differences
    16 domain
    17 error
    18 features
    19 important features
    20 input
    21 interaction
    22 learning procedure
    23 measures
    24 method
    25 nets
    26 network
    27 new features
    28 new learning procedure
    29 output
    30 output vector
    31 part
    32 procedure
    33 procedure1
    34 regularity
    35 representation
    36 results
    37 simple method
    38 task
    39 task domain
    40 units
    41 useful new features
    42 vector
    43 weight
    44 weight adjustment
    45 schema:name Learning representations by back-propagating errors
    46 schema:pagination 533-536
    47 schema:productId N5d2c117e5bfb4b078c5d38549196c487
    48 N60982abe74e147dfba0002b6987d1d08
    49 schema:sameAs https://app.dimensions.ai/details/publication/pub.1018367015
    50 https://doi.org/10.1038/323533a0
    51 schema:sdDatePublished 2022-09-02T15:46
    52 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    53 schema:sdPublisher Na532b86cf9da4413828735ac5d78b8f0
    54 schema:url https://doi.org/10.1038/323533a0
    55 sgo:license sg:explorer/license/
    56 sgo:sdDataset articles
    57 rdf:type schema:ScholarlyArticle
    58 N40b979f0cb8a4e86a2edc2d767076b68 schema:issueNumber 6088
    59 rdf:type schema:PublicationIssue
    60 N563ac206ef284d9a9dce10540f44b871 rdf:first sg:person.0615147542.17
    61 rdf:rest N5f20ddebfac3469d822339e97b480e4d
    62 N5d2c117e5bfb4b078c5d38549196c487 schema:name dimensions_id
    63 schema:value pub.1018367015
    64 rdf:type schema:PropertyValue
    65 N5f20ddebfac3469d822339e97b480e4d rdf:first sg:person.016024123731.46
    66 rdf:rest rdf:nil
    67 N60982abe74e147dfba0002b6987d1d08 schema:name doi
    68 schema:value 10.1038/323533a0
    69 rdf:type schema:PropertyValue
    70 Na532b86cf9da4413828735ac5d78b8f0 schema:name Springer Nature - SN SciGraph project
    71 rdf:type schema:Organization
    72 Nd4238d1165014c3c90e48ee401c255c6 schema:volumeNumber 323
    73 rdf:type schema:PublicationVolume
    74 Nfe3b11ae83c144c3898cbb801ffb5c8f rdf:first sg:person.011313517665.78
    75 rdf:rest N563ac206ef284d9a9dce10540f44b871
    76 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    77 schema:name Information and Computing Sciences
    78 rdf:type schema:DefinedTerm
    79 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    80 schema:name Artificial Intelligence and Image Processing
    81 rdf:type schema:DefinedTerm
    82 sg:journal.1018957 schema:issn 0028-0836
    83 1476-4687
    84 schema:name Nature
    85 schema:publisher Springer Nature
    86 rdf:type schema:Periodical
    87 sg:person.011313517665.78 schema:affiliation grid-institutes:None
    88 schema:familyName Rumelhart
    89 schema:givenName David E.
    90 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011313517665.78
    91 rdf:type schema:Person
    92 sg:person.016024123731.46 schema:affiliation grid-institutes:None
    93 schema:familyName Williams
    94 schema:givenName Ronald J.
    95 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016024123731.46
    96 rdf:type schema:Person
    97 sg:person.0615147542.17 schema:affiliation grid-institutes:grid.147455.6
    98 schema:familyName Hinton
    99 schema:givenName Geoffrey E.
    100 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0615147542.17
    101 rdf:type schema:Person
    102 grid-institutes:None schema:alternateName Institute for Cognitive Science, C-015, University of California, 92093, San Diego, La Jolla, California, USA
    103 schema:name Institute for Cognitive Science, C-015, University of California, 92093, San Diego, La Jolla, California, USA
    104 rdf:type schema:Organization
    105 grid-institutes:grid.147455.6 schema:alternateName Department of Computer Science, Carnegie-Mellon University, 15213, Pittsburgh, Pennsylvania, USA
    106 schema:name Department of Computer Science, Carnegie-Mellon University, 15213, Pittsburgh, Pennsylvania, USA
    107 rdf:type schema:Organization
     



