Studying the plasticity in deep convolutional neural networks using random pruning


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2019-01-20

AUTHORS

Deepak Mittal, Shweta Bhardwaj, Mitesh M. Khapra, Balaraman Ravindran

ABSTRACT

Recently, there has been a lot of work on pruning filters from deep convolutional neural networks (CNNs) with the intention of reducing computations. The key idea is to rank the filters based on a certain criterion (say, l1-norm, average percentage of zeros, etc.) and retain only the top-ranked filters. Once the low-scoring filters are pruned away, the remainder of the network is fine-tuned and is shown to give performance comparable to the original unpruned network. In this work, we report experiments which suggest that the comparable performance of the pruned network is not due to the specific criterion chosen, but due to the inherent plasticity of deep neural networks, which allows them to recover from the loss of the pruned filters once the rest of the filters are fine-tuned. Specifically, we show counterintuitive results wherein by randomly pruning 25–50% of the filters from deep CNNs we are able to obtain the same performance as obtained by using state-of-the-art pruning methods. We empirically validate our claims through an exhaustive evaluation with VGG-16 and ResNet-50. Further, we also evaluate a real-world scenario where a CNN trained on all 1000 ImageNet classes needs to be tested on only a small set of classes at test time (say, only animals). We create a new benchmark dataset from ImageNet to evaluate such class-specific pruning and show that even here a random pruning strategy gives close to state-of-the-art performance. Lastly, unlike existing approaches which mainly focus on the task of image classification, in this work we also report results on object detection and image segmentation. We show that using a simple random pruning strategy we can achieve a significant speedup in object detection (74% improvement in fps) while retaining the same accuracy as the original Faster-RCNN model. Similarly, we show that the performance of a pruned segmentation network is very similar to that of the original unpruned SegNet.
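
To make the setup concrete, the following is a minimal PyTorch-style sketch (illustrative only, not the authors' implementation) contrasting l1-norm ranking with the random pruning strategy studied in the paper; in both cases the retained filters would subsequently be fine-tuned:

import torch

def keep_indices(conv_weight: torch.Tensor, keep_frac: float = 0.75,
                 criterion: str = "l1") -> torch.Tensor:
    # conv_weight has shape (out_channels, in_channels, kH, kW);
    # return the indices of the filters to retain.
    n_filters = conv_weight.shape[0]
    n_keep = int(keep_frac * n_filters)
    if criterion == "l1":
        # Rank filters by their l1-norm and keep the top-scoring ones.
        scores = conv_weight.abs().sum(dim=(1, 2, 3))
        return scores.topk(n_keep).indices
    if criterion == "random":
        # Random pruning: keep a uniformly random subset of filters.
        return torch.randperm(n_filters)[:n_keep]
    raise ValueError(f"unknown criterion: {criterion}")

# Example: prune 25% of a 64-filter conv layer (3 input channels, 3x3 kernels).
w = torch.randn(64, 3, 3, 3)
print(keep_indices(w, keep_frac=0.75, criterion="l1"))
print(keep_indices(w, keep_frac=0.75, criterion="random"))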

PAGES

203-216

References to SciGraph publications

  • 2016-09-17. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks in COMPUTER VISION – ECCV 2016
  • 2007-01-01. Optimal Brain Surgeon for General Dynamic Neural Networks in PROGRESS IN ARTIFICIAL INTELLIGENCE
  • 2015-04-11. ImageNet Large Scale Visual Recognition Challenge in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2009-09-09. The Pascal Visual Object Classes (VOC) Challenge in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2016-09-17. SSD: Single Shot MultiBox Detector in COMPUTER VISION – ECCV 2016
Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/s00138-018-01001-9

    DOI

    http://dx.doi.org/10.1007/s00138-018-01001-9

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1111570790



    JSON-LD is the canonical representation for SciGraph data.


    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Department of Computer Science and Engineering, Robert Bosch Centre for Data Science and AI (RBC-DSAI), Indian Institute of Technology Madras, Chennai, India", 
              "id": "http://www.grid.ac/institutes/grid.417969.4", 
              "name": [
                "Department of Computer Science and Engineering, Robert Bosch Centre for Data Science and AI (RBC-DSAI), Indian Institute of Technology Madras, Chennai, India"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Mittal", 
            "givenName": "Deepak", 
            "id": "sg:person.012242306142.47", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012242306142.47"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Department of Computer Science and Engineering, Robert Bosch Centre for Data Science and AI (RBC-DSAI), Indian Institute of Technology Madras, Chennai, India", 
              "id": "http://www.grid.ac/institutes/grid.417969.4", 
              "name": [
                "Department of Computer Science and Engineering, Robert Bosch Centre for Data Science and AI (RBC-DSAI), Indian Institute of Technology Madras, Chennai, India"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Bhardwaj", 
            "givenName": "Shweta", 
            "id": "sg:person.014454525376.34", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014454525376.34"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Department of Computer Science and Engineering, Robert Bosch Centre for Data Science and AI (RBC-DSAI), Indian Institute of Technology Madras, Chennai, India", 
              "id": "http://www.grid.ac/institutes/grid.417969.4", 
              "name": [
                "Department of Computer Science and Engineering, Robert Bosch Centre for Data Science and AI (RBC-DSAI), Indian Institute of Technology Madras, Chennai, India"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Khapra", 
            "givenName": "Mitesh M.", 
            "id": "sg:person.01233542565.06", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01233542565.06"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Department of Computer Science and Engineering, Robert Bosch Centre for Data Science and AI (RBC-DSAI), Indian Institute of Technology Madras, Chennai, India", 
              "id": "http://www.grid.ac/institutes/grid.417969.4", 
              "name": [
                "Department of Computer Science and Engineering, Robert Bosch Centre for Data Science and AI (RBC-DSAI), Indian Institute of Technology Madras, Chennai, India"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Ravindran", 
            "givenName": "Balaraman", 
            "id": "sg:person.015501102741.39", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015501102741.39"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/s11263-015-0816-y", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1009767488", 
              "https://doi.org/10.1007/s11263-015-0816-y"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-46493-0_32", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1005632531", 
              "https://doi.org/10.1007/978-3-319-46493-0_32"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-46448-0_2", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1017177111", 
              "https://doi.org/10.1007/978-3-319-46448-0_2"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-540-77002-2_2", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1004228400", 
              "https://doi.org/10.1007/978-3-540-77002-2_2"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-009-0275-4", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1014796149", 
              "https://doi.org/10.1007/s11263-009-0275-4"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2019-01-20", 
        "datePublishedReg": "2019-01-20", 
        "description": "Recently, there has been a lot of work on pruning filters from deep convolutional neural networks (CNNs) with the intention of reducing computations. The key idea is to rank the filters based on a certain criterion (say, l1\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym}\n\t\t\t\t\\usepackage{amsfonts}\n\t\t\t\t\\usepackage{amssymb}\n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$l_1$$\\end{document}-norm, average percentage of zeros, etc.) and retain only the top-ranked filters. Once the low-scoring filters are pruned away, the remainder of the network is fine-tuned and is shown to give performance comparable to the original unpruned network. In this work, we report experiments which suggest that the comparable performance of the pruned network is not due to the specific criterion chosen, but due to the inherent plasticity of deep neural networks which allows them to recover from the loss of pruned filters once the rest of the filters are fine-tuned. Specifically, we show counterintuitive results wherein by randomly pruning 25\u201350% filters from deep CNNs we are able to obtain the same performance as obtained by using state-of-the-art pruning methods. We empirically validate our claims by doing an exhaustive evaluation with VGG-16 and ResNet-50. Further, we also evaluate a real-world scenario where a CNN trained on all 1000 ImageNet classes needs to be tested on only a small set of classes at test time (say, only animals). We create a new benchmark dataset from ImageNet to evaluate such class-specific pruning and show that even here a random pruning strategy gives close to state-of-the-art performance. Lastly, unlike existing approaches which mainly focus on the task of image classification, in this work we also report results on object detection and image segmentation. We show that using a simple random pruning strategy, we can achieve significant speedup in object detection (74% improvement in fps) while retaining the same accuracy as that of the original Faster-RCNN model. Similarly, we show that the performance of a pruned segmentation network is actually very similar to that of the original unpruned SegNet.", 
        "genre": "article", 
        "id": "sg:pub.10.1007/s00138-018-01001-9", 
        "inLanguage": "en", 
        "isAccessibleForFree": true, 
        "isPartOf": [
          {
            "id": "sg:journal.1045266", 
            "issn": [
              "0932-8092", 
              "1432-1769"
            ], 
            "name": "Machine Vision and Applications", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "2", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "30"
          }
        ], 
        "keywords": [
          "specific criteria", 
          "inherent plasticity", 
          "plasticity", 
          "criteria", 
          "rest", 
          "remainder", 
          "strategies", 
          "random pruning", 
          "evaluation", 
          "detection", 
          "loss", 
          "results", 
          "exhaustive evaluation", 
          "certain criteria", 
          "classification", 
          "time", 
          "intention", 
          "test time", 
          "unpruned network", 
          "state", 
          "class", 
          "ImageNet classes", 
          "method", 
          "Faster-RCNN model", 
          "model", 
          "approach", 
          "pruning", 
          "task", 
          "claims", 
          "work", 
          "ResNet-50", 
          "accuracy", 
          "performance", 
          "experiments", 
          "deep convolutional neural network", 
          "convolutional neural network", 
          "network", 
          "VGG-16", 
          "neural network", 
          "comparable performance", 
          "dataset", 
          "small set", 
          "same accuracy", 
          "filter", 
          "counterintuitive result", 
          "set", 
          "scenarios", 
          "new benchmark dataset", 
          "segmentation", 
          "idea", 
          "art pruning methods", 
          "deep neural networks", 
          "real-world scenarios", 
          "segmentation network", 
          "same performance", 
          "SegNet", 
          "ImageNet", 
          "image segmentation", 
          "image classification", 
          "pruning strategy", 
          "object detection", 
          "computation", 
          "pruning method", 
          "benchmark datasets", 
          "art performance", 
          "significant speedup", 
          "key idea", 
          "speedup", 
          "original unpruned network", 
          "such class-specific pruning", 
          "class-specific pruning", 
          "random pruning strategy", 
          "simple random pruning strategy", 
          "original Faster-RCNN model", 
          "original unpruned SegNet", 
          "unpruned SegNet"
        ], 
        "name": "Studying the plasticity in deep convolutional neural networks using random pruning", 
        "pagination": "203-216", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1111570790"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s00138-018-01001-9"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s00138-018-01001-9", 
          "https://app.dimensions.ai/details/publication/pub.1111570790"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2021-12-01T19:44", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20211201/entities/gbq_results/article/article_796.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1007/s00138-018-01001-9"
      }
    ]
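
    As a quick illustration, the record above can be queried with Python's standard json module; the sketch below assumes it has been saved locally as record.json (a hypothetical filename):

    import json

    # Load the JSON-LD record shown above (saved locally as record.json).
    with open("record.json") as f:
        records = json.load(f)

    article = records[0]  # the file is a list containing a single record
    print(article["name"])             # article title
    print(article["datePublished"])    # 2019-01-20
    authors = [f'{a["givenName"]} {a["familyName"]}' for a in article["author"]]
    print(", ".join(authors))
    print(len(article["citation"]), "SciGraph citations")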
     


    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s00138-018-01001-9'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s00138-018-01001-9'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s00138-018-01001-9'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s00138-018-01001-9'
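
    The same content negotiation can be scripted. Below is a minimal Python sketch, assuming the third-party requests package is installed:

    import requests

    RECORD_URL = "https://scigraph.springernature.com/pub.10.1007/s00138-018-01001-9"
    ACCEPT = {
        "json-ld": "application/ld+json",
        "n-triples": "application/n-triples",
        "turtle": "text/turtle",
        "rdf-xml": "application/rdf+xml",
    }

    def fetch_record(fmt: str = "json-ld") -> str:
        # Ask the server for the requested serialization via the Accept header.
        resp = requests.get(RECORD_URL, headers={"Accept": ACCEPT[fmt]})
        resp.raise_for_status()
        return resp.text

    print(fetch_record("turtle")[:500])  # preview the first 500 characters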


     

    This table displays all metadata directly associated with this object as RDF triples; a small verification sketch follows the table.

    175 TRIPLES      22 PREDICATES      106 URIs      93 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/s00138-018-01001-9 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author Nda5273828a3c43a1a0edccda4b562f7e
    4 schema:citation sg:pub.10.1007/978-3-319-46448-0_2
    5 sg:pub.10.1007/978-3-319-46493-0_32
    6 sg:pub.10.1007/978-3-540-77002-2_2
    7 sg:pub.10.1007/s11263-009-0275-4
    8 sg:pub.10.1007/s11263-015-0816-y
    9 schema:datePublished 2019-01-20
    10 schema:datePublishedReg 2019-01-20
    11 schema:description Recently, there has been a lot of work on pruning filters from deep convolutional neural networks (CNNs) with the intention of reducing computations. The key idea is to rank the filters based on a certain criterion (say, l1-norm, average percentage of zeros, etc.) and retain only the top-ranked filters. Once the low-scoring filters are pruned away, the remainder of the network is fine-tuned and is shown to give performance comparable to the original unpruned network. In this work, we report experiments which suggest that the comparable performance of the pruned network is not due to the specific criterion chosen, but due to the inherent plasticity of deep neural networks which allows them to recover from the loss of pruned filters once the rest of the filters are fine-tuned. Specifically, we show counterintuitive results wherein by randomly pruning 25–50% filters from deep CNNs we are able to obtain the same performance as obtained by using state-of-the-art pruning methods. We empirically validate our claims by doing an exhaustive evaluation with VGG-16 and ResNet-50. Further, we also evaluate a real-world scenario where a CNN trained on all 1000 ImageNet classes needs to be tested on only a small set of classes at test time (say, only animals). We create a new benchmark dataset from ImageNet to evaluate such class-specific pruning and show that even here a random pruning strategy gives close to state-of-the-art performance. Lastly, unlike existing approaches which mainly focus on the task of image classification, in this work we also report results on object detection and image segmentation. We show that using a simple random pruning strategy, we can achieve significant speedup in object detection (74% improvement in fps) while retaining the same accuracy as that of the original Faster-RCNN model. Similarly, we show that the performance of a pruned segmentation network is actually very similar to that of the original unpruned SegNet.
    12 schema:genre article
    13 schema:inLanguage en
    14 schema:isAccessibleForFree true
    15 schema:isPartOf N8846a4352b5e4a8bbce116c8dbc2d5c5
    16 N918546cce6f5444d9a255cd6db4a7b11
    17 sg:journal.1045266
    18 schema:keywords Faster-RCNN model
    19 ImageNet
    20 ImageNet classes
    21 ResNet-50
    22 SegNet
    23 VGG-16
    24 accuracy
    25 approach
    26 art performance
    27 art pruning methods
    28 benchmark datasets
    29 certain criteria
    30 claims
    31 class
    32 class-specific pruning
    33 classification
    34 comparable performance
    35 computation
    36 convolutional neural network
    37 counterintuitive result
    38 criteria
    39 dataset
    40 deep convolutional neural network
    41 deep neural networks
    42 detection
    43 evaluation
    44 exhaustive evaluation
    45 experiments
    46 filter
    47 idea
    48 image classification
    49 image segmentation
    50 inherent plasticity
    51 intention
    52 key idea
    53 loss
    54 method
    55 model
    56 network
    57 neural network
    58 new benchmark dataset
    59 object detection
    60 original Faster-RCNN model
    61 original unpruned SegNet
    62 original unpruned network
    63 performance
    64 plasticity
    65 pruning
    66 pruning method
    67 pruning strategy
    68 random pruning
    69 random pruning strategy
    70 real-world scenarios
    71 remainder
    72 rest
    73 results
    74 same accuracy
    75 same performance
    76 scenarios
    77 segmentation
    78 segmentation network
    79 set
    80 significant speedup
    81 simple random pruning strategy
    82 small set
    83 specific criteria
    84 speedup
    85 state
    86 strategies
    87 such class-specific pruning
    88 task
    89 test time
    90 time
    91 unpruned SegNet
    92 unpruned network
    93 work
    94 schema:name Studying the plasticity in deep convolutional neural networks using random pruning
    95 schema:pagination 203-216
    96 schema:productId N551d17a1c28b4864a6dd93ed231661f0
    97 Na19bf126437b47c0b3e78c1facaa7863
    98 schema:sameAs https://app.dimensions.ai/details/publication/pub.1111570790
    99 https://doi.org/10.1007/s00138-018-01001-9
    100 schema:sdDatePublished 2021-12-01T19:44
    101 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    102 schema:sdPublisher N1aeab236dd334d0aa397987bda21a399
    103 schema:url https://doi.org/10.1007/s00138-018-01001-9
    104 sgo:license sg:explorer/license/
    105 sgo:sdDataset articles
    106 rdf:type schema:ScholarlyArticle
    107 N01057f5b9707463b8bbc1ca1afa40f30 rdf:first sg:person.014454525376.34
    108 rdf:rest N4b05dea16fd2477484d75a90cd2cc790
    109 N1aeab236dd334d0aa397987bda21a399 schema:name Springer Nature - SN SciGraph project
    110 rdf:type schema:Organization
    111 N4b05dea16fd2477484d75a90cd2cc790 rdf:first sg:person.01233542565.06
    112 rdf:rest N4c8e602c6d0a4f28a38c40136b8ee7e8
    113 N4c8e602c6d0a4f28a38c40136b8ee7e8 rdf:first sg:person.015501102741.39
    114 rdf:rest rdf:nil
    115 N551d17a1c28b4864a6dd93ed231661f0 schema:name doi
    116 schema:value 10.1007/s00138-018-01001-9
    117 rdf:type schema:PropertyValue
    118 N8846a4352b5e4a8bbce116c8dbc2d5c5 schema:issueNumber 2
    119 rdf:type schema:PublicationIssue
    120 N918546cce6f5444d9a255cd6db4a7b11 schema:volumeNumber 30
    121 rdf:type schema:PublicationVolume
    122 Na19bf126437b47c0b3e78c1facaa7863 schema:name dimensions_id
    123 schema:value pub.1111570790
    124 rdf:type schema:PropertyValue
    125 Nda5273828a3c43a1a0edccda4b562f7e rdf:first sg:person.012242306142.47
    126 rdf:rest N01057f5b9707463b8bbc1ca1afa40f30
    127 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    128 schema:name Information and Computing Sciences
    129 rdf:type schema:DefinedTerm
    130 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    131 schema:name Artificial Intelligence and Image Processing
    132 rdf:type schema:DefinedTerm
    133 sg:journal.1045266 schema:issn 0932-8092
    134 1432-1769
    135 schema:name Machine Vision and Applications
    136 schema:publisher Springer Nature
    137 rdf:type schema:Periodical
    138 sg:person.012242306142.47 schema:affiliation grid-institutes:grid.417969.4
    139 schema:familyName Mittal
    140 schema:givenName Deepak
    141 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012242306142.47
    142 rdf:type schema:Person
    143 sg:person.01233542565.06 schema:affiliation grid-institutes:grid.417969.4
    144 schema:familyName Khapra
    145 schema:givenName Mitesh M.
    146 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01233542565.06
    147 rdf:type schema:Person
    148 sg:person.014454525376.34 schema:affiliation grid-institutes:grid.417969.4
    149 schema:familyName Bhardwaj
    150 schema:givenName Shweta
    151 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014454525376.34
    152 rdf:type schema:Person
    153 sg:person.015501102741.39 schema:affiliation grid-institutes:grid.417969.4
    154 schema:familyName Ravindran
    155 schema:givenName Balaraman
    156 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015501102741.39
    157 rdf:type schema:Person
    158 sg:pub.10.1007/978-3-319-46448-0_2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1017177111
    159 https://doi.org/10.1007/978-3-319-46448-0_2
    160 rdf:type schema:CreativeWork
    161 sg:pub.10.1007/978-3-319-46493-0_32 schema:sameAs https://app.dimensions.ai/details/publication/pub.1005632531
    162 https://doi.org/10.1007/978-3-319-46493-0_32
    163 rdf:type schema:CreativeWork
    164 sg:pub.10.1007/978-3-540-77002-2_2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1004228400
    165 https://doi.org/10.1007/978-3-540-77002-2_2
    166 rdf:type schema:CreativeWork
    167 sg:pub.10.1007/s11263-009-0275-4 schema:sameAs https://app.dimensions.ai/details/publication/pub.1014796149
    168 https://doi.org/10.1007/s11263-009-0275-4
    169 rdf:type schema:CreativeWork
    170 sg:pub.10.1007/s11263-015-0816-y schema:sameAs https://app.dimensions.ai/details/publication/pub.1009767488
    171 https://doi.org/10.1007/s11263-015-0816-y
    172 rdf:type schema:CreativeWork
    173 grid-institutes:grid.417969.4 schema:alternateName Department of Computer Science and Engineering, Robert Bosch Centre for Data Science and AI (RBC-DSAI), Indian Institute of Technology Madras, Chennai, India
    174 schema:name Department of Computer Science and Engineering, Robert Bosch Centre for Data Science and AI (RBC-DSAI), Indian Institute of Technology Madras, Chennai, India
    175 rdf:type schema:Organization
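
    The counts reported above can be checked programmatically. Here is a short sketch, assuming the rdflib package and a local copy of the Turtle serialization (record.ttl is a hypothetical filename):

    import rdflib

    g = rdflib.Graph()
    g.parse("record.ttl", format="turtle")  # fetched via the curl command above

    predicates = {p for _, p, _ in g}
    print(len(g), "triples,", len(predicates), "distinct predicates")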
     



