Topological measurement of deep neural networks using persistent homology


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2021-07-03

AUTHORS

Satoru Watanabe, Hayato Yamana

ABSTRACT

The inner representation of deep neural networks (DNNs) is indecipherable, which makes it difficult to tune DNN models, control their training process, and interpret their outputs. In this paper, we propose a novel approach to investigate the inner representation of DNNs through topological data analysis (TDA). Persistent homology (PH), one of the outstanding methods in TDA, was employed for investigating the complexities of trained DNNs. We constructed clique complexes on trained DNNs and calculated the one-dimensional PH of DNNs. The PH reveals the combinational effects of multiple neurons in DNNs at different resolutions, which is difficult to be captured without using PH. Evaluations were conducted using fully connected networks (FCNs) and networks combining FCNs and convolutional neural networks (CNNs) trained on the MNIST and CIFAR-10 data sets. Evaluation results demonstrate that the PH of DNNs reflects both the excess of neurons and problem difficulty, making PH one of the prominent methods for investigating the inner representation of DNNs.

PAGES

1-18

References to SciGraph publications

  • 2014-05-14. The Simplex Tree: An Efficient Data Structure for General Simplicial Complexes in ALGORITHMICA
  • 2015-05-27. Deep learning in NATURE
  • 2016-03-31. The topology of the directed clique complex as a network invariant in SPRINGERPLUS
  • 2017-08-09. A roadmap for the computation of persistent homology in EPJ DATA SCIENCE
  • 2014. javaPlex: A Research Software Package for Persistent (Co)Homology in MATHEMATICAL SOFTWARE – ICMS 2014
  • 2014. Visualizing and Understanding Convolutional Networks in COMPUTER VISION – ECCV 2014
  • 2017-11-16. Cliques and cavities in the human connectome in JOURNAL OF COMPUTATIONAL NEUROSCIENCE
  • 2014-10-11. A topological measurement of protein compressibility in JAPAN JOURNAL OF INDUSTRIAL AND APPLIED MATHEMATICS
IDENTIFIERS

    URI

    http://scigraph.springernature.com/pub.10.1007/s10472-021-09761-3

    DOI

    http://dx.doi.org/10.1007/s10472-021-09761-3

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1139354834



    JSON-LD is the canonical representation for SciGraph data.


    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Graduate School of Fundamental Science and Engineering, Waseda University, Shinjuku-ku, Tokyo, Japan", 
              "id": "http://www.grid.ac/institutes/grid.5290.e", 
              "name": [
                "Graduate School of Fundamental Science and Engineering, Waseda University, Shinjuku-ku, Tokyo, Japan"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Watanabe", 
            "givenName": "Satoru", 
            "id": "sg:person.013353716133.93", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013353716133.93"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Graduate School of Fundamental Science and Engineering, Waseda University, Shinjuku-ku, Tokyo, Japan", 
              "id": "http://www.grid.ac/institutes/grid.5290.e", 
              "name": [
                "Graduate School of Fundamental Science and Engineering, Waseda University, Shinjuku-ku, Tokyo, Japan"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Yamana", 
            "givenName": "Hayato", 
            "id": "sg:person.012365541005.42", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012365541005.42"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/978-3-319-10590-1_53", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1032233097", 
              "https://doi.org/10.1007/978-3-319-10590-1_53"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s13160-014-0153-5", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1024641185", 
              "https://doi.org/10.1007/s13160-014-0153-5"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1186/s40064-016-2022-y", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1017028106", 
              "https://doi.org/10.1186/s40064-016-2022-y"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-662-44199-2_23", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1019497967", 
              "https://doi.org/10.1007/978-3-662-44199-2_23"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s00453-014-9887-3", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1015428206", 
              "https://doi.org/10.1007/s00453-014-9887-3"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1038/nature14539", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1010020120", 
              "https://doi.org/10.1038/nature14539"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s10827-017-0672-6", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1092711878", 
              "https://doi.org/10.1007/s10827-017-0672-6"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1140/epjds/s13688-017-0109-5", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1091132845", 
              "https://doi.org/10.1140/epjds/s13688-017-0109-5"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2021-07-03", 
        "datePublishedReg": "2021-07-03", 
        "description": "The inner representation of deep neural networks (DNNs) is indecipherable, which makes it difficult to tune DNN models, control their training process, and interpret their outputs. In this paper, we propose a novel approach to investigate the inner representation of DNNs through topological data analysis (TDA). Persistent homology (PH), one of the outstanding methods in TDA, was employed for investigating the complexities of trained DNNs. We constructed clique complexes on trained DNNs and calculated the one-dimensional PH of DNNs. The PH reveals the combinational effects of multiple neurons in DNNs at different resolutions, which is difficult to be captured without using PH. Evaluations were conducted using fully connected networks (FCNs) and networks combining FCNs and convolutional neural networks (CNNs) trained on the MNIST and CIFAR-10 data sets. Evaluation results demonstrate that the PH of DNNs reflects both the excess of neurons and problem difficulty, making PH one of the prominent methods for investigating the inner representation of DNNs.", 
        "genre": "article", 
        "id": "sg:pub.10.1007/s10472-021-09761-3", 
        "inLanguage": "en", 
        "isAccessibleForFree": true, 
        "isFundedItemOf": [
          {
            "id": "sg:grant.9023643", 
            "type": "MonetaryGrant"
          }
        ], 
        "isPartOf": [
          {
            "id": "sg:journal.1043955", 
            "issn": [
              "1012-2443", 
              "1573-7470"
            ], 
            "name": "Annals of Mathematics and Artificial Intelligence", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }
        ], 
        "keywords": [
          "deep neural networks", 
          "convolutional neural network", 
          "topological data analysis", 
          "neural network", 
          "persistent homology", 
          "CIFAR-10 data sets", 
          "DNN model", 
          "inner representation", 
          "training process", 
          "evaluation results", 
          "different resolutions", 
          "network", 
          "data sets", 
          "problem difficulty", 
          "novel approach", 
          "prominent methods", 
          "outstanding method", 
          "clique complexes", 
          "representation", 
          "data analysis", 
          "topological measurements", 
          "MNIST", 
          "FCN", 
          "pH one", 
          "multiple neurons", 
          "complexity", 
          "set", 
          "method", 
          "output", 
          "model", 
          "difficulties", 
          "evaluation", 
          "one", 
          "process", 
          "results", 
          "resolution", 
          "analysis", 
          "neurons", 
          "measurements", 
          "combinational effect", 
          "excess of neurons", 
          "effect", 
          "homology", 
          "excess", 
          "complexes", 
          "approach", 
          "paper", 
          "one-dimensional PH", 
          "PH of DNNs"
        ], 
        "name": "Topological measurement of deep neural networks using persistent homology", 
        "pagination": "1-18", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1139354834"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s10472-021-09761-3"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s10472-021-09761-3", 
          "https://app.dimensions.ai/details/publication/pub.1139354834"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-01-01T19:01", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20220101/entities/gbq_results/article/article_895.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1007/s10472-021-09761-3"
      }
    ]
     


    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s10472-021-09761-3'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s10472-021-09761-3'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s10472-021-09761-3'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s10472-021-09761-3'
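The JSON-LD returned by the `application/ld+json` request above can be consumed with any JSON library. A minimal sketch in Python, using a trimmed stand-in for the full record shown earlier (the real response wraps the record in a one-element list, as the excerpt here does):

```python
import json

# Trimmed stand-in for the JSON-LD record returned by the curl command above;
# the full response contains many more fields (citations, keywords, etc.).
record = json.loads("""
[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
    "name": "Topological measurement of deep neural networks using persistent homology",
    "datePublished": "2021-07-03",
    "author": [
      {"familyName": "Watanabe", "givenName": "Satoru", "type": "Person"},
      {"familyName": "Yamana", "givenName": "Hayato", "type": "Person"}
    ],
    "productId": [
      {"name": "doi", "type": "PropertyValue", "value": ["10.1007/s10472-021-09761-3"]}
    ]
  }
]
""")

pub = record[0]  # SciGraph wraps each record in a one-element list
title = pub["name"]
authors = [f'{a["givenName"]} {a["familyName"]}' for a in pub["author"]]
doi = next(p["value"][0] for p in pub["productId"] if p["name"] == "doi")

print(title)
print(", ".join(authors))
print(doi)
```

To run this against the live endpoint, fetch the URL with the `Accept: application/ld+json` header (e.g. via `urllib.request`) and pass the response body to `json.loads` in place of the literal above.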


     

    This record's metadata comprises 142 RDF triples: 22 predicates, 80 URIs, 64 literals, and 4 blank nodes. The triple-level listing restates the JSON-LD record above.





