Deep Networks Can Resemble Human Feed-forward Vision in Invariant Object Recognition


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2016-09-07

AUTHORS

Saeed Reza Kheradpisheh, Masoud Ghodrati, Mohammad Ganjtabesh, Timothée Masquelier

ABSTRACT

Deep convolutional neural networks (DCNNs) have attracted much attention recently, and have been shown to recognize thousands of object categories in natural image databases. Their architecture is somewhat similar to that of the human visual system: both use restricted receptive fields and a hierarchy of layers that progressively extract more and more abstract features. Yet it is unknown whether DCNNs match human performance at the task of view-invariant object recognition, whether they make similar errors and use similar representations for this task, and whether the answers depend on the magnitude of the viewpoint variations. To investigate these issues, we benchmarked eight state-of-the-art DCNNs, the HMAX model, and a baseline shallow model, and compared their results to those of humans with backward masking. Unlike in all previous DCNN studies, we carefully controlled the magnitude of the viewpoint variations to demonstrate that shallow nets can outperform deep nets and humans when variations are weak. When facing larger variations, however, more layers were needed to match human performance and error distributions, and to have representations that are consistent with human behavior. A very deep net with 18 layers even outperformed humans at the highest variation level, using the most human-like representations.

PAGES

32672

References to SciGraph publications

  • 2013-12-28. Detecting meaning in RSVP at 13 ms per picture in ATTENTION, PERCEPTION, & PSYCHOPHYSICS
  • 2015-05-27. Deep learning in NATURE
  • 1996-06. Speed of processing in the human visual system in NATURE
  • 2014. Visualizing and Understanding Convolutional Networks in COMPUTER VISION – ECCV 2014
  • 1980-04. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position in BIOLOGICAL CYBERNETICS
  • 2014-12-19. Does object view influence the scene consistency effect? in ATTENTION, PERCEPTION, & PSYCHOPHYSICS
  • 2014-01-08. Computer science: The learning machines in NATURE
  • 2003-08-17. Faces and objects in macaque cerebral cortex in NATURE NEUROSCIENCE
  • 2013-04-18. Top-down influences on visual processing in NATURE REVIEWS NEUROSCIENCE
Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1038/srep32672

    DOI

    http://dx.doi.org/10.1038/srep32672

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1037659705

    PUBMED

    https://www.ncbi.nlm.nih.gov/pubmed/27601096


    Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
    Incoming Citations: browse incoming citations for this publication using opencitations.net (see the sketch below).
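
    For example, incoming citations can be pulled straight from the OpenCitations COCI REST API. A minimal sketch in Python, assuming the public v1 endpoint /index/coci/api/v1/citations/{doi} and the requests library:

    import requests

    # OpenCitations COCI endpoint listing works that cite a given DOI
    # (assumption: the public v1 REST API documented at opencitations.net).
    DOI = "10.1038/srep32672"
    url = f"https://opencitations.net/index/coci/api/v1/citations/{DOI}"

    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    citations = resp.json()  # a list of citation records

    print(f"{len(citations)} incoming citations")
    for c in citations[:5]:
        # each record carries the citing DOI and the citation creation date
        print(c["citing"], c["creation"])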

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/17", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Psychology and Cognitive Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/1701", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Psychology", 
            "type": "DefinedTerm"
          }, 
          {
            "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
            "name": "Humans", 
            "type": "DefinedTerm"
          }, 
          {
            "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
            "name": "Nerve Net", 
            "type": "DefinedTerm"
          }, 
          {
            "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
            "name": "Vision, Ocular", 
            "type": "DefinedTerm"
          }, 
          {
            "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
            "name": "Visual Perception", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "CERCO UMR 5549, CNRS \u2013 Universit\u00e9 de Toulouse, F-31300, France", 
              "id": "http://www.grid.ac/institutes/None", 
              "name": [
                "Department of Computer Science, School of Mathematics, Statistics, and Computer Science, University of Tehran, Tehran, Iran", 
                "CERCO UMR 5549, CNRS \u2013 Universit\u00e9 de Toulouse, F-31300, France"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Kheradpisheh", 
            "givenName": "Saeed Reza", 
            "id": "sg:person.01030064000.31", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01030064000.31"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Neuroscience Program, Biomedicine Discovery Institute, Monash University", 
              "id": "http://www.grid.ac/institutes/grid.1002.3", 
              "name": [
                "Department of Physiology, Monash University, Clayton, Australia 3800", 
                "Neuroscience Program, Biomedicine Discovery Institute, Monash University"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Ghodrati", 
            "givenName": "Masoud", 
            "id": "sg:person.01344206233.34", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01344206233.34"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Department of Computer Science, School of Mathematics, Statistics, and Computer Science, University of Tehran, Tehran, Iran", 
              "id": "http://www.grid.ac/institutes/grid.46072.37", 
              "name": [
                "Department of Computer Science, School of Mathematics, Statistics, and Computer Science, University of Tehran, Tehran, Iran"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Ganjtabesh", 
            "givenName": "Mohammad", 
            "id": "sg:person.01110354353.58", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01110354353.58"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "CNRS, UMR-7210, Paris, F-75012, France", 
              "id": "http://www.grid.ac/institutes/grid.4444.0", 
              "name": [
                "CERCO UMR 5549, CNRS \u2013 Universit\u00e9 de Toulouse, F-31300, France", 
                "INSERM, U968, Paris, F-75012, France", 
                "Sorbonne Universit\u00e9s, UPMC Univ Paris 06, UMR-S 968, Institut de la Vision, Paris, F-75012, France", 
                "CNRS, UMR-7210, Paris, F-75012, France"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Masquelier", 
            "givenName": "Timoth\u00e9e", 
            "id": "sg:person.01271016666.11", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01271016666.11"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1038/nrn3476", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1019012794", 
              "https://doi.org/10.1038/nrn3476"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/bf00344251", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1016635886", 
              "https://doi.org/10.1007/bf00344251"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-10590-1_53", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1032233097", 
              "https://doi.org/10.1007/978-3-319-10590-1_53"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1038/nature14539", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1010020120", 
              "https://doi.org/10.1038/nature14539"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1038/381520a0", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1018357603", 
              "https://doi.org/10.1038/381520a0"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.3758/s13414-013-0605-z", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1031409474", 
              "https://doi.org/10.3758/s13414-013-0605-z"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1038/505146a", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1004459263", 
              "https://doi.org/10.1038/505146a"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.3758/s13414-014-0817-x", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1036319000", 
              "https://doi.org/10.3758/s13414-014-0817-x"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1038/nn1111", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1040117063", 
              "https://doi.org/10.1038/nn1111"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2016-09-07", 
        "datePublishedReg": "2016-09-07", 
        "description": "Deep convolutional neural networks (DCNNs) have attracted much attention recently, and have shown to be able to recognize thousands of object categories in natural image databases. Their architecture is somewhat similar to that of the human visual system: both use restricted receptive fields, and a hierarchy of layers which progressively extract more and more abstracted features. Yet it is unknown whether DCNNs match human performance at the task of view-invariant object recognition, whether they make similar errors and use similar representations for this task, and whether the answers depend on the magnitude of the viewpoint variations. To investigate these issues, we benchmarked eight state-of-the-art DCNNs, the HMAX model, and a baseline shallow model and compared their results to those of humans with backward masking. Unlike in all previous DCNN studies, we carefully controlled the magnitude of the viewpoint variations to demonstrate that shallow nets can outperform deep nets and humans when variations are weak. When facing larger variations, however, more layers were needed to match human performance and error distributions, and to have representations that are consistent with human behavior. A very deep net with 18 layers even outperformed humans at the highest variation level, using the most human-like representations.", 
        "genre": "article", 
        "id": "sg:pub.10.1038/srep32672", 
        "inLanguage": "en", 
        "isAccessibleForFree": true, 
        "isPartOf": [
          {
            "id": "sg:journal.1045337", 
            "issn": [
              "2045-2322"
            ], 
            "name": "Scientific Reports", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "1", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "6"
          }
        ], 
        "keywords": [
          "deep convolutional neural network", 
          "human performance", 
          "view-invariant object recognition", 
          "object recognition", 
          "invariant object recognition", 
          "deep nets", 
          "human-like representation", 
          "viewpoint variations", 
          "human visual system", 
          "art deep convolutional neural networks", 
          "convolutional neural network", 
          "natural image database", 
          "object categories", 
          "hierarchy of layers", 
          "backward masking", 
          "HMAX model", 
          "visual system", 
          "human behavior", 
          "high variation levels", 
          "image database", 
          "deep network", 
          "shallow models", 
          "neural network", 
          "similar representations", 
          "shallow nets", 
          "task", 
          "similar errors", 
          "representation", 
          "network", 
          "receptive fields", 
          "recognition", 
          "nets", 
          "humans", 
          "more layers", 
          "architecture", 
          "masking", 
          "performance", 
          "attention", 
          "error distribution", 
          "vision", 
          "database", 
          "behavior", 
          "hierarchy", 
          "thousands", 
          "model", 
          "categories", 
          "answers", 
          "system", 
          "error", 
          "features", 
          "issues", 
          "variation levels", 
          "study", 
          "layer", 
          "results", 
          "levels", 
          "field", 
          "magnitude", 
          "state", 
          "large variation", 
          "variation", 
          "distribution", 
          "abstracted features", 
          "baseline shallow model", 
          "previous DCNN studies", 
          "DCNN studies", 
          "Human Feed-forward Vision", 
          "Feed-forward Vision"
        ], 
        "name": "Deep Networks Can Resemble Human Feed-forward Vision in Invariant Object Recognition", 
        "pagination": "32672", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1037659705"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1038/srep32672"
            ]
          }, 
          {
            "name": "pubmed_id", 
            "type": "PropertyValue", 
            "value": [
              "27601096"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1038/srep32672", 
          "https://app.dimensions.ai/details/publication/pub.1037659705"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-01-01T18:43", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20220101/entities/gbq_results/article/article_719.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1038/srep32672"
      }
    ]
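
    Since the record above is plain JSON, it can be consumed with nothing more than Python's standard json module. A minimal sketch, assuming the array has been saved to a local file srep32672.jsonld (hypothetical filename):

    import json

    # Load the JSON-LD record shown above (a one-element JSON array).
    with open("srep32672.jsonld") as f:
        records = json.load(f)

    pub = records[0]
    print(pub["name"])            # article title
    print(pub["datePublished"])   # 2016-09-07
    for author in pub["author"]:
        print(author["givenName"], author["familyName"])

    # Pull the DOI out of the productId entries.
    doi = next(p["value"][0] for p in pub["productId"] if p["name"] == "doi")
    print("https://doi.org/" + doi)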
     

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML (license info).

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1038/srep32672'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1038/srep32672'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1038/srep32672'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1038/srep32672'
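
    The same content negotiation works from code. A minimal sketch in Python mirroring the curl commands above, assuming the requests library:

    import requests

    RECORD = "https://scigraph.springernature.com/pub.10.1038/srep32672"

    # Accept header -> serialization name, as listed above.
    FORMATS = {
        "application/ld+json": "json-ld",
        "application/n-triples": "nt",
        "text/turtle": "turtle",
        "application/rdf+xml": "xml",
    }

    for accept, name in FORMATS.items():
        resp = requests.get(RECORD, headers={"Accept": accept}, timeout=30)
        resp.raise_for_status()
        print(f"{name}: {len(resp.text)} bytes")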


     

    This table displays all metadata directly associated with this object as RDF triples.

    216 TRIPLES      22 PREDICATES      107 URIs      90 LITERALS      11 BLANK NODES
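
    Counts like these can be recomputed from the N-Triples download. A minimal sketch, assuming the requests and rdflib libraries are installed:

    import requests
    from rdflib import Graph

    RECORD = "https://scigraph.springernature.com/pub.10.1038/srep32672"

    # Fetch the record as N-Triples and parse it into an rdflib graph.
    nt = requests.get(RECORD, headers={"Accept": "application/n-triples"},
                      timeout=30).text
    g = Graph()
    g.parse(data=nt, format="nt")

    # Iterating a Graph yields (subject, predicate, object) triples.
    predicates = {p for _, p, _ in g}
    print(f"{len(g)} triples, {len(predicates)} distinct predicates")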

    Subject Predicate Object
    1 sg:pub.10.1038/srep32672 schema:about N11a7e3f6f4ee493ea72bf5a6b5b16d1c
    2 Nb51ce2c5d26b459a9bd1e17d29e9ce01
    3 Ndad74973d0c641869f7e73410c110fa6
    4 Nfc2d729aa1e54df6b1f5eb52c2473530
    5 anzsrc-for:17
    6 anzsrc-for:1701
    7 schema:author N246e6753f7c54200aa7e5cb990e42236
    8 schema:citation sg:pub.10.1007/978-3-319-10590-1_53
    9 sg:pub.10.1007/bf00344251
    10 sg:pub.10.1038/381520a0
    11 sg:pub.10.1038/505146a
    12 sg:pub.10.1038/nature14539
    13 sg:pub.10.1038/nn1111
    14 sg:pub.10.1038/nrn3476
    15 sg:pub.10.3758/s13414-013-0605-z
    16 sg:pub.10.3758/s13414-014-0817-x
    17 schema:datePublished 2016-09-07
    18 schema:datePublishedReg 2016-09-07
    19 schema:description Deep convolutional neural networks (DCNNs) have attracted much attention recently, and have shown to be able to recognize thousands of object categories in natural image databases. Their architecture is somewhat similar to that of the human visual system: both use restricted receptive fields, and a hierarchy of layers which progressively extract more and more abstracted features. Yet it is unknown whether DCNNs match human performance at the task of view-invariant object recognition, whether they make similar errors and use similar representations for this task, and whether the answers depend on the magnitude of the viewpoint variations. To investigate these issues, we benchmarked eight state-of-the-art DCNNs, the HMAX model, and a baseline shallow model and compared their results to those of humans with backward masking. Unlike in all previous DCNN studies, we carefully controlled the magnitude of the viewpoint variations to demonstrate that shallow nets can outperform deep nets and humans when variations are weak. When facing larger variations, however, more layers were needed to match human performance and error distributions, and to have representations that are consistent with human behavior. A very deep net with 18 layers even outperformed humans at the highest variation level, using the most human-like representations.
    20 schema:genre article
    21 schema:inLanguage en
    22 schema:isAccessibleForFree true
    23 schema:isPartOf N5f7a224d0af54dd3b406530c0da9840e
    24 Nb90b16d962a94c91aa87a03b95c5aec1
    25 sg:journal.1045337
    26 schema:keywords DCNN studies
    27 Feed-forward Vision
    28 HMAX model
    29 Human Feed-forward Vision
    30 abstracted features
    31 answers
    32 architecture
    33 art deep convolutional neural networks
    34 attention
    35 backward masking
    36 baseline shallow model
    37 behavior
    38 categories
    39 convolutional neural network
    40 database
    41 deep convolutional neural network
    42 deep nets
    43 deep network
    44 distribution
    45 error
    46 error distribution
    47 features
    48 field
    49 hierarchy
    50 hierarchy of layers
    51 high variation levels
    52 human behavior
    53 human performance
    54 human visual system
    55 human-like representation
    56 humans
    57 image database
    58 invariant object recognition
    59 issues
    60 large variation
    61 layer
    62 levels
    63 magnitude
    64 masking
    65 model
    66 more layers
    67 natural image database
    68 nets
    69 network
    70 neural network
    71 object categories
    72 object recognition
    73 performance
    74 previous DCNN studies
    75 receptive fields
    76 recognition
    77 representation
    78 results
    79 shallow models
    80 shallow nets
    81 similar errors
    82 similar representations
    83 state
    84 study
    85 system
    86 task
    87 thousands
    88 variation
    89 variation levels
    90 view-invariant object recognition
    91 viewpoint variations
    92 vision
    93 visual system
    94 schema:name Deep Networks Can Resemble Human Feed-forward Vision in Invariant Object Recognition
    95 schema:pagination 32672
    96 schema:productId N163976aca34341dcaccbe4afd20991d3
    97 N6a6c12f799bd4fa597b23b0792e213bb
    98 Nc84a9dd9ec564ab6863bda8d8ba84e34
    99 schema:sameAs https://app.dimensions.ai/details/publication/pub.1037659705
    100 https://doi.org/10.1038/srep32672
    101 schema:sdDatePublished 2022-01-01T18:43
    102 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    103 schema:sdPublisher N5cf007d7a8b64182ac7c4aad856d7753
    104 schema:url https://doi.org/10.1038/srep32672
    105 sgo:license sg:explorer/license/
    106 sgo:sdDataset articles
    107 rdf:type schema:ScholarlyArticle
    108 N11a7e3f6f4ee493ea72bf5a6b5b16d1c schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
    109 schema:name Vision, Ocular
    110 rdf:type schema:DefinedTerm
    111 N163976aca34341dcaccbe4afd20991d3 schema:name dimensions_id
    112 schema:value pub.1037659705
    113 rdf:type schema:PropertyValue
    114 N246e6753f7c54200aa7e5cb990e42236 rdf:first sg:person.01030064000.31
    115 rdf:rest N79c6a9e1d8c74aec8bb9228b19820c42
    116 N2661d49be3a34ed4bad45cab8e4e2200 rdf:first sg:person.01271016666.11
    117 rdf:rest rdf:nil
    118 N5cf007d7a8b64182ac7c4aad856d7753 schema:name Springer Nature - SN SciGraph project
    119 rdf:type schema:Organization
    120 N5f7a224d0af54dd3b406530c0da9840e schema:volumeNumber 6
    121 rdf:type schema:PublicationVolume
    122 N6a6c12f799bd4fa597b23b0792e213bb schema:name doi
    123 schema:value 10.1038/srep32672
    124 rdf:type schema:PropertyValue
    125 N79c6a9e1d8c74aec8bb9228b19820c42 rdf:first sg:person.01344206233.34
    126 rdf:rest Nbd89c303847245cb870f9fb7d9ffd901
    127 Nb51ce2c5d26b459a9bd1e17d29e9ce01 schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
    128 schema:name Nerve Net
    129 rdf:type schema:DefinedTerm
    130 Nb90b16d962a94c91aa87a03b95c5aec1 schema:issueNumber 1
    131 rdf:type schema:PublicationIssue
    132 Nbd89c303847245cb870f9fb7d9ffd901 rdf:first sg:person.01110354353.58
    133 rdf:rest N2661d49be3a34ed4bad45cab8e4e2200
    134 Nc84a9dd9ec564ab6863bda8d8ba84e34 schema:name pubmed_id
    135 schema:value 27601096
    136 rdf:type schema:PropertyValue
    137 Ndad74973d0c641869f7e73410c110fa6 schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
    138 schema:name Humans
    139 rdf:type schema:DefinedTerm
    140 Nfc2d729aa1e54df6b1f5eb52c2473530 schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
    141 schema:name Visual Perception
    142 rdf:type schema:DefinedTerm
    143 anzsrc-for:17 schema:inDefinedTermSet anzsrc-for:
    144 schema:name Psychology and Cognitive Sciences
    145 rdf:type schema:DefinedTerm
    146 anzsrc-for:1701 schema:inDefinedTermSet anzsrc-for:
    147 schema:name Psychology
    148 rdf:type schema:DefinedTerm
    149 sg:journal.1045337 schema:issn 2045-2322
    150 schema:name Scientific Reports
    151 schema:publisher Springer Nature
    152 rdf:type schema:Periodical
    153 sg:person.01030064000.31 schema:affiliation grid-institutes:None
    154 schema:familyName Kheradpisheh
    155 schema:givenName Saeed Reza
    156 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01030064000.31
    157 rdf:type schema:Person
    158 sg:person.01110354353.58 schema:affiliation grid-institutes:grid.46072.37
    159 schema:familyName Ganjtabesh
    160 schema:givenName Mohammad
    161 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01110354353.58
    162 rdf:type schema:Person
    163 sg:person.01271016666.11 schema:affiliation grid-institutes:grid.4444.0
    164 schema:familyName Masquelier
    165 schema:givenName Timothée
    166 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01271016666.11
    167 rdf:type schema:Person
    168 sg:person.01344206233.34 schema:affiliation grid-institutes:grid.1002.3
    169 schema:familyName Ghodrati
    170 schema:givenName Masoud
    171 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01344206233.34
    172 rdf:type schema:Person
    173 sg:pub.10.1007/978-3-319-10590-1_53 schema:sameAs https://app.dimensions.ai/details/publication/pub.1032233097
    174 https://doi.org/10.1007/978-3-319-10590-1_53
    175 rdf:type schema:CreativeWork
    176 sg:pub.10.1007/bf00344251 schema:sameAs https://app.dimensions.ai/details/publication/pub.1016635886
    177 https://doi.org/10.1007/bf00344251
    178 rdf:type schema:CreativeWork
    179 sg:pub.10.1038/381520a0 schema:sameAs https://app.dimensions.ai/details/publication/pub.1018357603
    180 https://doi.org/10.1038/381520a0
    181 rdf:type schema:CreativeWork
    182 sg:pub.10.1038/505146a schema:sameAs https://app.dimensions.ai/details/publication/pub.1004459263
    183 https://doi.org/10.1038/505146a
    184 rdf:type schema:CreativeWork
    185 sg:pub.10.1038/nature14539 schema:sameAs https://app.dimensions.ai/details/publication/pub.1010020120
    186 https://doi.org/10.1038/nature14539
    187 rdf:type schema:CreativeWork
    188 sg:pub.10.1038/nn1111 schema:sameAs https://app.dimensions.ai/details/publication/pub.1040117063
    189 https://doi.org/10.1038/nn1111
    190 rdf:type schema:CreativeWork
    191 sg:pub.10.1038/nrn3476 schema:sameAs https://app.dimensions.ai/details/publication/pub.1019012794
    192 https://doi.org/10.1038/nrn3476
    193 rdf:type schema:CreativeWork
    194 sg:pub.10.3758/s13414-013-0605-z schema:sameAs https://app.dimensions.ai/details/publication/pub.1031409474
    195 https://doi.org/10.3758/s13414-013-0605-z
    196 rdf:type schema:CreativeWork
    197 sg:pub.10.3758/s13414-014-0817-x schema:sameAs https://app.dimensions.ai/details/publication/pub.1036319000
    198 https://doi.org/10.3758/s13414-014-0817-x
    199 rdf:type schema:CreativeWork
    200 grid-institutes:None schema:alternateName CERCO UMR 5549, CNRS – Université de Toulouse, F-31300, France
    201 schema:name CERCO UMR 5549, CNRS – Université de Toulouse, F-31300, France
    202 Department of Computer Science, School of Mathematics, Statistics, and Computer Science, University of Tehran, Tehran, Iran
    203 rdf:type schema:Organization
    204 grid-institutes:grid.1002.3 schema:alternateName Neuroscience Program, Biomedicine Discovery Institute, Monash University
    205 schema:name Department of Physiology, Monash University, Clayton, Australia 3800
    206 Neuroscience Program, Biomedicine Discovery Institute, Monash University
    207 rdf:type schema:Organization
    208 grid-institutes:grid.4444.0 schema:alternateName CNRS, UMR-7210, Paris, F-75012, France
    209 schema:name CERCO UMR 5549, CNRS – Université de Toulouse, F-31300, France
    210 CNRS, UMR-7210, Paris, F-75012, France
    211 INSERM, U968, Paris, F-75012, France
    212 Sorbonne Universités, UPMC Univ Paris 06, UMR-S 968, Institut de la Vision, Paris, F-75012, France
    213 rdf:type schema:Organization
    214 grid-institutes:grid.46072.37 schema:alternateName Department of Computer Science, School of Mathematics, Statistics, and Computer Science, University of Tehran, Tehran, Iran
    215 schema:name Department of Computer Science, School of Mathematics, Statistics, and Computer Science, University of Tehran, Tehran, Iran
    216 rdf:type schema:Organization
     



