Iterative unsupervised domain adaptation for generalized cell detection from brightfield z-stacks


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2019-02-15

AUTHORS

Kaisa Liimatainen, Lauri Kananen, Leena Latonen, Pekka Ruusuvuori

ABSTRACT

BACKGROUND: Cell counting from cell cultures is required in multiple biological and biomedical research applications. In particular, accurate brightfield-based cell counting methods are needed for cell growth analysis. With deep learning, cells can be detected with high accuracy, but manually annotated training data is required. We propose a method for cell detection that requires annotated training data for one cell line only, and generalizes to other, unseen cell lines. RESULTS: Training a deep learning model with one cell line only can provide accurate detections for similar unseen cell lines (domains). However, if the new domain is very dissimilar from the training domain, high precision but lower recall is achieved. Generalization capabilities of the model can be improved with training data transformations, but only to a certain degree. To further improve the detection accuracy on unseen domains, we propose an iterative unsupervised domain adaptation method. Predictions on unseen cell lines with high precision enable automatic generation of training data, which is used to train the model together with parts of the previously used annotated training data. We used a U-Net-based model and three consecutive focal planes from brightfield image z-stacks. We trained the model initially with the PC-3 cell line, and used the LNCaP, BT-474 and 22Rv1 cell lines as target domains for domain adaptation. The highest improvement in accuracy was achieved for 22Rv1 cells: the F1-score after supervised training was only 0.65, but after unsupervised domain adaptation we achieved a score of 0.84. Mean accuracy for the target domains was 0.87, with a mean improvement of 16 percent. CONCLUSIONS: With our method for generalized cell detection, we can train a model that accurately detects different cell lines from brightfield images. A new cell line can be introduced to the model without a single manual annotation, and after iterative domain adaptation the model is ready to detect these cells with high accuracy.
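The adaptation loop the abstract describes (train on an annotated source cell line, pseudo-label the most confident predictions on an unlabeled target line, retrain on the mixture of pseudo-labels and part of the source annotations, repeat) can be illustrated with a toy one-dimensional self-training sketch. This is not the authors' U-Net pipeline: the threshold classifier and all helper names (`train`, `predict`, `confidence`, `self_train`) are illustrative assumptions used only to show the shape of the iteration.

```python
def train(xs, ys):
    # Toy "model": a threshold halfway between the class means.
    pos = [x for x, y in zip(xs, ys) if y]
    neg = [x for x, y in zip(xs, ys) if not y]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(thr, x):
    # Detection decision for one sample.
    return x > thr

def confidence(thr, x):
    # Distance from the decision boundary stands in for prediction confidence.
    return abs(x - thr)

def self_train(source_x, source_y, target_x, rounds=3, keep=0.5):
    """Iterative unsupervised domain adaptation, schematically:
    high-confidence target predictions become pseudo-labels that are
    mixed with the annotated source data for retraining."""
    thr = train(source_x, source_y)
    for _ in range(rounds):
        # Keep only the most confident target samples (high precision).
        ranked = sorted(target_x, key=lambda x: -confidence(thr, x))
        confident = ranked[: int(len(ranked) * keep)]
        pseudo_y = [predict(thr, x) for x in confident]
        # Retrain on pseudo-labels plus the original source annotations.
        thr = train(source_x + confident, source_y + pseudo_y)
    return thr
```

On a target set whose feature distribution is shifted relative to the source, the learned threshold moves toward the target domain over the iterations, which is the effect the paper exploits at much larger scale with a U-Net detector.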

PAGES

80

References to SciGraph publications

  • 2017-09-13. Domain-Adversarial Training of Neural Networks in DOMAIN ADAPTATION IN COMPUTER VISION APPLICATIONS
  • 2008-01-01. Automatic Segmentation of Unstained Living Cells in Bright-Field Microscope Images in ADVANCES IN MASS DATA ANALYSIS OF IMAGES AND SIGNALS IN MEDICINE, BIOTECHNOLOGY, CHEMISTRY AND FOOD INDUSTRY
  • 2006-10-31. CellProfiler: image analysis software for identifying and quantifying cell phenotypes in GENOME BIOLOGY
  • 2019-05-29. Class-Agnostic Counting in COMPUTER VISION – ACCV 2018
  • 2017-06-29. Assessing phototoxicity in live fluorescence imaging in NATURE METHODS
  • 2016-12-07. Imagining the future of bioimage analysis in NATURE BIOTECHNOLOGY
  • 2009-10-23. A theory of learning from different domains in MACHINE LEARNING
  • 2015-11-18. U-Net: Convolutional Networks for Biomedical Image Segmentation in MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2015
  • 2011-05-05. Automatic segmentation of adherent biological cell boundaries and nuclei from brightfield microscopy images in MACHINE VISION AND APPLICATIONS
  • 2017-08-10. Automated Training of Deep Convolutional Neural Networks for Cell Segmentation in SCIENTIFIC REPORTS
  • 2017-06-22. Automatic Brain Tumor Detection and Segmentation Using U-Net Based Fully Convolutional Networks in MEDICAL IMAGE UNDERSTANDING AND ANALYSIS
  • 2013-10-04. An automatic method for robust and fast cell detection in bright field images from high-throughput microscopy in BMC BIOINFORMATICS
  • Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1186/s12859-019-2605-z

    DOI

    http://dx.doi.org/10.1186/s12859-019-2605-z

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1112168759

    PUBMED

    https://www.ncbi.nlm.nih.gov/pubmed/30767778



    JSON-LD is the canonical representation for SciGraph data.


    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }, 
          {
            "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
            "name": "Deep Learning", 
            "type": "DefinedTerm"
          }, 
          {
            "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
            "name": "Humans", 
            "type": "DefinedTerm"
          }, 
          {
            "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
            "name": "Image Processing, Computer-Assisted", 
            "type": "DefinedTerm"
          }, 
          {
            "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
            "name": "Male", 
            "type": "DefinedTerm"
          }, 
          {
            "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
            "name": "Prostatic Neoplasms", 
            "type": "DefinedTerm"
          }, 
          {
            "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
            "name": "Tumor Cells, Cultured", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland", 
              "id": "http://www.grid.ac/institutes/grid.502801.e", 
              "name": [
                "Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Liimatainen", 
            "givenName": "Kaisa", 
            "id": "sg:person.014542550263.89", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014542550263.89"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland", 
              "id": "http://www.grid.ac/institutes/grid.502801.e", 
              "name": [
                "Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Kananen", 
            "givenName": "Lauri", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland", 
              "id": "http://www.grid.ac/institutes/grid.9668.1", 
              "name": [
                "Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland", 
                "Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Latonen", 
            "givenName": "Leena", 
            "id": "sg:person.01273350374.27", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01273350374.27"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland", 
              "id": "http://www.grid.ac/institutes/grid.502801.e", 
              "name": [
                "Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Ruusuvuori", 
            "givenName": "Pekka", 
            "id": "sg:person.01244150266.41", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01244150266.41"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/978-3-540-70715-8_13", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1026523454", 
              "https://doi.org/10.1007/978-3-540-70715-8_13"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s10994-009-5152-4", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1029224515", 
              "https://doi.org/10.1007/s10994-009-5152-4"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1038/nmeth.4344", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1090281650", 
              "https://doi.org/10.1038/nmeth.4344"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s00138-011-0337-9", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1021554950", 
              "https://doi.org/10.1007/s00138-011-0337-9"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1038/nbt.3722", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1014155836", 
              "https://doi.org/10.1038/nbt.3722"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-20893-6_42", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1115969961", 
              "https://doi.org/10.1007/978-3-030-20893-6_42"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1186/gb-2006-7-10-r100", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1040889351", 
              "https://doi.org/10.1186/gb-2006-7-10-r100"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1038/s41598-017-07599-6", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1091142206", 
              "https://doi.org/10.1038/s41598-017-07599-6"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-58347-1_10", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1091568302", 
              "https://doi.org/10.1007/978-3-319-58347-1_10"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1186/1471-2105-14-297", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1031327902", 
              "https://doi.org/10.1186/1471-2105-14-297"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-60964-5_44", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1086148497", 
              "https://doi.org/10.1007/978-3-319-60964-5_44"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-24574-4_28", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1017774818", 
              "https://doi.org/10.1007/978-3-319-24574-4_28"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2019-02-15", 
        "datePublishedReg": "2019-02-15", 
        "description": "BACKGROUND: Cell counting from cell cultures is required in multiple biological and biomedical research applications. Especially, accurate brightfield-based cell counting methods are needed for cell growth analysis. With deep learning, cells can be detected with high accuracy, but manually annotated training data is required. We propose a method for cell detection that requires annotated training data for one cell line only, and generalizes to other, unseen cell lines.\nRESULTS: Training a deep learning model with one cell line only can provide accurate detections for similar unseen cell lines (domains). However, if the new domain is very dissimilar from training domain, high precision but lower recall is achieved. Generalization capabilities of the model can be improved with training data transformations, but only to a certain degree. To further improve the detection accuracy of unseen domains, we propose iterative unsupervised domain adaptation method. Predictions of unseen cell lines with high precision enable automatic generation of training data, which is used to train the model together with parts of the previously used annotated training data. We used U-Net-based model, and three consecutive focal planes from brightfield image z-stacks. We trained the model initially with PC-3 cell line, and used LNCaP, BT-474 and 22Rv1 cell lines as target domains for domain adaptation. Highest improvement in accuracy was achieved for 22Rv1 cells. F1-score after supervised training was only 0.65, but after unsupervised domain adaptation we achieved a score of 0.84. Mean accuracy for target domains was 0.87, with mean improvement of 16 percent.\nCONCLUSIONS: With our method for generalized cell detection, we can train a model that accurately detects different cell lines from brightfield images. 
A new cell line can be introduced to the model without a single manual annotation, and after iterative domain adaptation the model is ready to detect these cells with high accuracy.", 
        "genre": "article", 
        "id": "sg:pub.10.1186/s12859-019-2605-z", 
        "inLanguage": "en", 
        "isAccessibleForFree": true, 
        "isFundedItemOf": [
          {
            "id": "sg:grant.8842214", 
            "type": "MonetaryGrant"
          }, 
          {
            "id": "sg:grant.8837871", 
            "type": "MonetaryGrant"
          }, 
          {
            "id": "sg:grant.8837780", 
            "type": "MonetaryGrant"
          }
        ], 
        "isPartOf": [
          {
            "id": "sg:journal.1023786", 
            "issn": [
              "1471-2105"
            ], 
            "name": "BMC Bioinformatics", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "1", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "20"
          }
        ], 
        "keywords": [
          "unsupervised domain adaptation", 
          "domain adaptation", 
          "training data", 
          "target domain", 
          "unsupervised domain adaptation method", 
          "deep learning models", 
          "domain adaptation methods", 
          "net-based model", 
          "high accuracy", 
          "deep learning", 
          "unseen domains", 
          "automatic generation", 
          "generalization capability", 
          "manual annotation", 
          "detection accuracy", 
          "learning model", 
          "supervised training", 
          "data transformation", 
          "adaptation method", 
          "cell detection", 
          "training domain", 
          "low recall", 
          "new domain", 
          "high precision", 
          "brightfield images", 
          "mean accuracy", 
          "accurate detection", 
          "accuracy", 
          "biomedical research applications", 
          "research applications", 
          "detection", 
          "domain", 
          "annotation", 
          "stack", 
          "learning", 
          "precision", 
          "images", 
          "model", 
          "capability", 
          "data", 
          "method", 
          "counting method", 
          "recall", 
          "certain degree", 
          "adaptation", 
          "applications", 
          "improvement", 
          "training", 
          "cell counting method", 
          "prediction", 
          "counting", 
          "highest improvement", 
          "generation", 
          "focal plane", 
          "transformation", 
          "part", 
          "cell counting", 
          "lines", 
          "cell growth analysis", 
          "analysis", 
          "plane", 
          "degree", 
          "scores", 
          "growth analysis", 
          "percent", 
          "mean improvement", 
          "culture", 
          "cells", 
          "different cell lines", 
          "PC-3 cell line", 
          "new cell line", 
          "BT-474", 
          "cell cultures", 
          "cell lines", 
          "LNCaP", 
          "accurate brightfield-based cell counting methods", 
          "brightfield-based cell counting methods", 
          "unseen cell lines", 
          "similar unseen cell lines", 
          "training data transformations", 
          "iterative unsupervised domain adaptation method", 
          "consecutive focal planes", 
          "brightfield image z", 
          "image z", 
          "generalized cell detection", 
          "single manual annotation", 
          "iterative domain adaptation", 
          "Iterative unsupervised domain adaptation", 
          "brightfield z"
        ], 
        "name": "Iterative unsupervised domain adaptation for generalized cell detection from brightfield z-stacks", 
        "pagination": "80", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1112168759"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1186/s12859-019-2605-z"
            ]
          }, 
          {
            "name": "pubmed_id", 
            "type": "PropertyValue", 
            "value": [
              "30767778"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1186/s12859-019-2605-z", 
          "https://app.dimensions.ai/details/publication/pub.1112168759"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2021-12-01T19:46", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20211201/entities/gbq_results/article/article_831.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1186/s12859-019-2605-z"
      }
    ]
     


    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1186/s12859-019-2605-z'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1186/s12859-019-2605-z'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1186/s12859-019-2605-z'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1186/s12859-019-2605-z'
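The same content negotiation works from any HTTP client, not just curl. A minimal Python sketch follows; the helper names `fetch_jsonld` and `summarize` are illustrative, not part of any SciGraph client library, and the field extraction simply mirrors the schema.org keys visible in the JSON-LD record above.

```python
import json
import urllib.request

SCIGRAPH_URL = "https://scigraph.springernature.com/pub.10.1186/s12859-019-2605-z"

def fetch_jsonld(url, timeout=10):
    """Request the JSON-LD representation via the Accept header."""
    req = urllib.request.Request(url, headers={"Accept": "application/ld+json"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))

def summarize(record):
    """Pull a few schema.org fields out of a SciGraph article record."""
    # SciGraph wraps the article object in a one-element list.
    article = record[0] if isinstance(record, list) else record
    return {
        "name": article.get("name"),
        "datePublished": article.get("datePublished"),
        "authors": [a.get("familyName") for a in article.get("author", [])],
    }

# Example (requires network access):
# print(summarize(fetch_jsonld(SCIGRAPH_URL)))
```

Requesting `application/n-triples`, `text/turtle`, or `application/rdf+xml` instead returns the other serializations listed above.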


     

    This table displays all metadata directly associated to this object as RDF triples.

    252 TRIPLES      22 PREDICATES      133 URIs      113 LITERALS      13 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1186/s12859-019-2605-z schema:about N18176dd821cd4c66b9841c2a67e4c84e
    2 N25df1d8513ad4c42a1aa5ab501bf9c28
    3 N3cd3c1864dfa4493aa1a2bf79c752431
    4 N9162d434a7704b0380e728e0e26ee1c8
    5 Nb6c7150d07464e9e96d933eb9830e94d
    6 Ne87e35d5fb024dd1a4b4cc4f4209071c
    7 anzsrc-for:08
    8 anzsrc-for:0801
    9 schema:author N388999a74872425a94d2889556919341
    10 schema:citation sg:pub.10.1007/978-3-030-20893-6_42
    11 sg:pub.10.1007/978-3-319-24574-4_28
    12 sg:pub.10.1007/978-3-319-58347-1_10
    13 sg:pub.10.1007/978-3-319-60964-5_44
    14 sg:pub.10.1007/978-3-540-70715-8_13
    15 sg:pub.10.1007/s00138-011-0337-9
    16 sg:pub.10.1007/s10994-009-5152-4
    17 sg:pub.10.1038/nbt.3722
    18 sg:pub.10.1038/nmeth.4344
    19 sg:pub.10.1038/s41598-017-07599-6
    20 sg:pub.10.1186/1471-2105-14-297
    21 sg:pub.10.1186/gb-2006-7-10-r100
    22 schema:datePublished 2019-02-15
    23 schema:datePublishedReg 2019-02-15
    24 schema:description BACKGROUND: Cell counting from cell cultures is required in multiple biological and biomedical research applications. Especially, accurate brightfield-based cell counting methods are needed for cell growth analysis. With deep learning, cells can be detected with high accuracy, but manually annotated training data is required. We propose a method for cell detection that requires annotated training data for one cell line only, and generalizes to other, unseen cell lines. RESULTS: Training a deep learning model with one cell line only can provide accurate detections for similar unseen cell lines (domains). However, if the new domain is very dissimilar from training domain, high precision but lower recall is achieved. Generalization capabilities of the model can be improved with training data transformations, but only to a certain degree. To further improve the detection accuracy of unseen domains, we propose iterative unsupervised domain adaptation method. Predictions of unseen cell lines with high precision enable automatic generation of training data, which is used to train the model together with parts of the previously used annotated training data. We used U-Net-based model, and three consecutive focal planes from brightfield image z-stacks. We trained the model initially with PC-3 cell line, and used LNCaP, BT-474 and 22Rv1 cell lines as target domains for domain adaptation. Highest improvement in accuracy was achieved for 22Rv1 cells. F<sub>1</sub>-score after supervised training was only 0.65, but after unsupervised domain adaptation we achieved a score of 0.84. Mean accuracy for target domains was 0.87, with mean improvement of 16 percent. CONCLUSIONS: With our method for generalized cell detection, we can train a model that accurately detects different cell lines from brightfield images. 
A new cell line can be introduced to the model without a single manual annotation, and after iterative domain adaptation the model is ready to detect these cells with high accuracy.
    25 schema:genre article
    26 schema:inLanguage en
    27 schema:isAccessibleForFree true
    28 schema:isPartOf Na962c2fa149c40b1a67fbdd6a89e68fe
    29 Ne99b7c903ea946879fb30e1ee1b3d9ea
    30 sg:journal.1023786
    31 schema:keywords BT-474
    32 Iterative unsupervised domain adaptation
    33 LNCaP
    34 PC-3 cell line
    35 accuracy
    36 accurate brightfield-based cell counting methods
    37 accurate detection
    38 adaptation
    39 adaptation method
    40 analysis
    41 annotation
    42 applications
    43 automatic generation
    44 biomedical research applications
    45 brightfield image z
    46 brightfield images
    47 brightfield z
    48 brightfield-based cell counting methods
    49 capability
    50 cell counting
    51 cell counting method
    52 cell cultures
    53 cell detection
    54 cell growth analysis
    55 cell lines
    56 cells
    57 certain degree
    58 consecutive focal planes
    59 counting
    60 counting method
    61 culture
    62 data
    63 data transformation
    64 deep learning
    65 deep learning models
    66 degree
    67 detection
    68 detection accuracy
    69 different cell lines
    70 domain
    71 domain adaptation
    72 domain adaptation methods
    73 focal plane
    74 generalization capability
    75 generalized cell detection
    76 generation
    77 growth analysis
    78 high accuracy
    79 high precision
    80 highest improvement
    81 image z
    82 images
    83 improvement
    84 iterative domain adaptation
    85 iterative unsupervised domain adaptation method
    86 learning
    87 learning model
    88 lines
    89 low recall
    90 manual annotation
    91 mean accuracy
    92 mean improvement
    93 method
    94 model
    95 net-based model
    96 new cell line
    97 new domain
    98 part
    99 percent
    100 plane
    101 precision
    102 prediction
    103 recall
    104 research applications
    105 scores
    106 similar unseen cell lines
    107 single manual annotation
    108 stack
    109 supervised training
    110 target domain
    111 training
    112 training data
    113 training data transformations
    114 training domain
    115 transformation
    116 unseen cell lines
    117 unseen domains
    118 unsupervised domain adaptation
    119 unsupervised domain adaptation method
    120 schema:name Iterative unsupervised domain adaptation for generalized cell detection from brightfield z-stacks
    121 schema:pagination 80
    122 schema:productId N19d2f9cf43274b08a1bfa1f3bd36d342
    123 N512ad5f0c3ef49bf94eb39b5660f3435
    124 Nfc8dc765c5c64fe297ba0c5baaa3018c
    125 schema:sameAs https://app.dimensions.ai/details/publication/pub.1112168759
    126 https://doi.org/10.1186/s12859-019-2605-z
    127 schema:sdDatePublished 2021-12-01T19:46
    128 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    129 schema:sdPublisher Nabccc003a9084225a71806acf64773f5
    130 schema:url https://doi.org/10.1186/s12859-019-2605-z
    131 sgo:license sg:explorer/license/
    132 sgo:sdDataset articles
    133 rdf:type schema:ScholarlyArticle
    134 N170051a78a4e46bf88a6af462e563d86 rdf:first sg:person.01273350374.27
    135 rdf:rest Ned06e96e42cf4b98b7571f52791b134c
    136 N18176dd821cd4c66b9841c2a67e4c84e schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
    137 schema:name Tumor Cells, Cultured
    138 rdf:type schema:DefinedTerm
    139 N19d2f9cf43274b08a1bfa1f3bd36d342 schema:name pubmed_id
    140 schema:value 30767778
    141 rdf:type schema:PropertyValue
    142 N25df1d8513ad4c42a1aa5ab501bf9c28 schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
    143 schema:name Male
    144 rdf:type schema:DefinedTerm
    145 N388999a74872425a94d2889556919341 rdf:first sg:person.014542550263.89
    146 rdf:rest Nfa771d06b5304ef78ad0a90aef356822
    147 N3cd3c1864dfa4493aa1a2bf79c752431 schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
    148 schema:name Prostatic Neoplasms
    149 rdf:type schema:DefinedTerm
    150 N512ad5f0c3ef49bf94eb39b5660f3435 schema:name doi
    151 schema:value 10.1186/s12859-019-2605-z
    152 rdf:type schema:PropertyValue
    153 N5dedb0c462b8411889fe5f1fa06ce7e9 schema:affiliation grid-institutes:grid.502801.e
    154 schema:familyName Kananen
    155 schema:givenName Lauri
    156 rdf:type schema:Person
    157 N9162d434a7704b0380e728e0e26ee1c8 schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
    158 schema:name Deep Learning
    159 rdf:type schema:DefinedTerm
    160 Na962c2fa149c40b1a67fbdd6a89e68fe schema:issueNumber 1
    161 rdf:type schema:PublicationIssue
    162 Nabccc003a9084225a71806acf64773f5 schema:name Springer Nature - SN SciGraph project
    163 rdf:type schema:Organization
    164 Nb6c7150d07464e9e96d933eb9830e94d schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
    165 schema:name Image Processing, Computer-Assisted
    166 rdf:type schema:DefinedTerm
    167 Ne87e35d5fb024dd1a4b4cc4f4209071c schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
    168 schema:name Humans
    169 rdf:type schema:DefinedTerm
    170 Ne99b7c903ea946879fb30e1ee1b3d9ea schema:volumeNumber 20
    171 rdf:type schema:PublicationVolume
    172 Ned06e96e42cf4b98b7571f52791b134c rdf:first sg:person.01244150266.41
    173 rdf:rest rdf:nil
    174 Nfa771d06b5304ef78ad0a90aef356822 rdf:first N5dedb0c462b8411889fe5f1fa06ce7e9
    175 rdf:rest N170051a78a4e46bf88a6af462e563d86
    176 Nfc8dc765c5c64fe297ba0c5baaa3018c schema:name dimensions_id
    177 schema:value pub.1112168759
    178 rdf:type schema:PropertyValue
    179 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    180 schema:name Information and Computing Sciences
    181 rdf:type schema:DefinedTerm
    182 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    183 schema:name Artificial Intelligence and Image Processing
    184 rdf:type schema:DefinedTerm
    185 sg:grant.8837780 http://pending.schema.org/fundedItem sg:pub.10.1186/s12859-019-2605-z
    186 rdf:type schema:MonetaryGrant
    187 sg:grant.8837871 http://pending.schema.org/fundedItem sg:pub.10.1186/s12859-019-2605-z
    188 rdf:type schema:MonetaryGrant
    189 sg:grant.8842214 http://pending.schema.org/fundedItem sg:pub.10.1186/s12859-019-2605-z
    190 rdf:type schema:MonetaryGrant
    191 sg:journal.1023786 schema:issn 1471-2105
    192 schema:name BMC Bioinformatics
    193 schema:publisher Springer Nature
    194 rdf:type schema:Periodical
    195 sg:person.01244150266.41 schema:affiliation grid-institutes:grid.502801.e
    196 schema:familyName Ruusuvuori
    197 schema:givenName Pekka
    198 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01244150266.41
    199 rdf:type schema:Person
    200 sg:person.01273350374.27 schema:affiliation grid-institutes:grid.9668.1
    201 schema:familyName Latonen
    202 schema:givenName Leena
    203 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01273350374.27
    204 rdf:type schema:Person
    205 sg:person.014542550263.89 schema:affiliation grid-institutes:grid.502801.e
    206 schema:familyName Liimatainen
    207 schema:givenName Kaisa
    208 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014542550263.89
    209 rdf:type schema:Person
    210 sg:pub.10.1007/978-3-030-20893-6_42 schema:sameAs https://app.dimensions.ai/details/publication/pub.1115969961
    211 https://doi.org/10.1007/978-3-030-20893-6_42
    212 rdf:type schema:CreativeWork
    213 sg:pub.10.1007/978-3-319-24574-4_28 schema:sameAs https://app.dimensions.ai/details/publication/pub.1017774818
    214 https://doi.org/10.1007/978-3-319-24574-4_28
    215 rdf:type schema:CreativeWork
    216 sg:pub.10.1007/978-3-319-58347-1_10 schema:sameAs https://app.dimensions.ai/details/publication/pub.1091568302
    217 https://doi.org/10.1007/978-3-319-58347-1_10
    218 rdf:type schema:CreativeWork
    219 sg:pub.10.1007/978-3-319-60964-5_44 schema:sameAs https://app.dimensions.ai/details/publication/pub.1086148497
    220 https://doi.org/10.1007/978-3-319-60964-5_44
    221 rdf:type schema:CreativeWork
    222 sg:pub.10.1007/978-3-540-70715-8_13 schema:sameAs https://app.dimensions.ai/details/publication/pub.1026523454
    223 https://doi.org/10.1007/978-3-540-70715-8_13
    224 rdf:type schema:CreativeWork
    225 sg:pub.10.1007/s00138-011-0337-9 schema:sameAs https://app.dimensions.ai/details/publication/pub.1021554950
    226 https://doi.org/10.1007/s00138-011-0337-9
    227 rdf:type schema:CreativeWork
    228 sg:pub.10.1007/s10994-009-5152-4 schema:sameAs https://app.dimensions.ai/details/publication/pub.1029224515
    229 https://doi.org/10.1007/s10994-009-5152-4
    230 rdf:type schema:CreativeWork
    231 sg:pub.10.1038/nbt.3722 schema:sameAs https://app.dimensions.ai/details/publication/pub.1014155836
    232 https://doi.org/10.1038/nbt.3722
    233 rdf:type schema:CreativeWork
    234 sg:pub.10.1038/nmeth.4344 schema:sameAs https://app.dimensions.ai/details/publication/pub.1090281650
    235 https://doi.org/10.1038/nmeth.4344
    236 rdf:type schema:CreativeWork
    237 sg:pub.10.1038/s41598-017-07599-6 schema:sameAs https://app.dimensions.ai/details/publication/pub.1091142206
    238 https://doi.org/10.1038/s41598-017-07599-6
    239 rdf:type schema:CreativeWork
    240 sg:pub.10.1186/1471-2105-14-297 schema:sameAs https://app.dimensions.ai/details/publication/pub.1031327902
    241 https://doi.org/10.1186/1471-2105-14-297
    242 rdf:type schema:CreativeWork
    243 sg:pub.10.1186/gb-2006-7-10-r100 schema:sameAs https://app.dimensions.ai/details/publication/pub.1040889351
    244 https://doi.org/10.1186/gb-2006-7-10-r100
    245 rdf:type schema:CreativeWork
    246 grid-institutes:grid.502801.e schema:alternateName Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
    247 schema:name Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
    248 rdf:type schema:Organization
    249 grid-institutes:grid.9668.1 schema:alternateName Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
    250 schema:name Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
    251 Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
    252 rdf:type schema:Organization
     



