Iterative unsupervised domain adaptation for generalized cell detection from brightfield z-stacks


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2019-02-15

AUTHORS

Kaisa Liimatainen, Lauri Kananen, Leena Latonen, Pekka Ruusuvuori

ABSTRACT

BACKGROUND: Cell counting from cell cultures is required in multiple biological and biomedical research applications. In particular, accurate brightfield-based cell counting methods are needed for cell growth analysis. With deep learning, cells can be detected with high accuracy, but manually annotated training data are required. We propose a method for cell detection that requires annotated training data for only one cell line and generalizes to other, unseen cell lines. RESULTS: Training a deep learning model with only one cell line can provide accurate detections for similar unseen cell lines (domains). However, if the new domain is very dissimilar from the training domain, high precision but lower recall is achieved. The generalization capability of the model can be improved with training data transformations, but only to a certain degree. To further improve the detection accuracy on unseen domains, we propose an iterative unsupervised domain adaptation method. Predictions on unseen cell lines with high precision enable automatic generation of training data, which is used to train the model together with parts of the previously used annotated training data. We used a U-Net-based model and three consecutive focal planes from brightfield image z-stacks. We trained the model initially with the PC-3 cell line, and used the LNCaP, BT-474 and 22Rv1 cell lines as target domains for domain adaptation. The highest improvement in accuracy was achieved for 22Rv1 cells: the F1-score after supervised training was only 0.65, but after unsupervised domain adaptation we achieved a score of 0.84. Mean accuracy for the target domains was 0.87, with a mean improvement of 16 percent. CONCLUSIONS: With our method for generalized cell detection, we can train a model that accurately detects different cell lines from brightfield images. A new cell line can be introduced to the model without a single manual annotation, and after iterative domain adaptation the model is ready to detect these cells with high accuracy.
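The adaptation loop the abstract describes (train on a source cell line, pseudo-label a target cell line with only the high-precision detections, retrain on the pseudo-labels mixed with part of the annotated source data, repeat) can be sketched as follows. This is a minimal illustration, not the paper's pipeline: `Model` is a hypothetical stand-in for the U-Net detector, and `predict_confident` stands in for the high-precision detection step.

```python
class Model:
    """Stand-in detector: 'training' just records which samples it has seen."""
    def __init__(self):
        self.seen = set()

    def fit(self, samples):
        self.seen.update(samples)

    def predict_confident(self, samples):
        # Treat a sample as a confident (high-precision) detection when it is
        # close to something already in the training data.
        return [s for s in samples if any(abs(s - t) <= 1 for t in self.seen)]


def iterative_adaptation(model, source, target, rounds=3, source_fraction=0.5):
    """Iterative unsupervised domain adaptation, as described in the abstract."""
    model.fit(source)                                # initial supervised training
    for _ in range(rounds):
        pseudo = model.predict_confident(target)     # auto-generated training data
        if not pseudo:
            break
        keep = source[: int(len(source) * source_fraction)]
        model.fit(keep + pseudo)                     # retrain on the mixture
    return model
```

Each round can pull in target samples that were too dissimilar for the initial model, which is the mechanism behind the reported recall improvement on dissimilar cell lines such as 22Rv1.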

PAGES

80

References to SciGraph publications

  • 2017-09-13. Domain-Adversarial Training of Neural Networks in DOMAIN ADAPTATION IN COMPUTER VISION APPLICATIONS
  • 2008-01-01. Automatic Segmentation of Unstained Living Cells in Bright-Field Microscope Images in ADVANCES IN MASS DATA ANALYSIS OF IMAGES AND SIGNALS IN MEDICINE, BIOTECHNOLOGY, CHEMISTRY AND FOOD INDUSTRY
  • 2006-10-31. CellProfiler: image analysis software for identifying and quantifying cell phenotypes in GENOME BIOLOGY
  • 2019-05-29. Class-Agnostic Counting in COMPUTER VISION – ACCV 2018
  • 2017-06-29. Assessing phototoxicity in live fluorescence imaging in NATURE METHODS
  • 2016-12-07. Imagining the future of bioimage analysis in NATURE BIOTECHNOLOGY
  • 2009-10-23. A theory of learning from different domains in MACHINE LEARNING
  • 2015-11-18. U-Net: Convolutional Networks for Biomedical Image Segmentation in MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2015
  • 2011-05-05. Automatic segmentation of adherent biological cell boundaries and nuclei from brightfield microscopy images in MACHINE VISION AND APPLICATIONS
  • 2017-08-10. Automated Training of Deep Convolutional Neural Networks for Cell Segmentation in SCIENTIFIC REPORTS
  • 2017-06-22. Automatic Brain Tumor Detection and Segmentation Using U-Net Based Fully Convolutional Networks in MEDICAL IMAGE UNDERSTANDING AND ANALYSIS
  • 2013-10-04. An automatic method for robust and fast cell detection in bright field images from high-throughput microscopy in BMC BIOINFORMATICS
Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1186/s12859-019-2605-z

    DOI

    http://dx.doi.org/10.1186/s12859-019-2605-z

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1112168759

    PUBMED

    https://www.ncbi.nlm.nih.gov/pubmed/30767778



    JSON-LD is the canonical representation for SciGraph data.


    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }, 
          {
            "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
            "name": "Deep Learning", 
            "type": "DefinedTerm"
          }, 
          {
            "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
            "name": "Humans", 
            "type": "DefinedTerm"
          }, 
          {
            "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
            "name": "Image Processing, Computer-Assisted", 
            "type": "DefinedTerm"
          }, 
          {
            "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
            "name": "Male", 
            "type": "DefinedTerm"
          }, 
          {
            "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
            "name": "Prostatic Neoplasms", 
            "type": "DefinedTerm"
          }, 
          {
            "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
            "name": "Tumor Cells, Cultured", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland", 
              "id": "http://www.grid.ac/institutes/grid.502801.e", 
              "name": [
                "Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Liimatainen", 
            "givenName": "Kaisa", 
            "id": "sg:person.014542550263.89", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014542550263.89"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland", 
              "id": "http://www.grid.ac/institutes/grid.502801.e", 
              "name": [
                "Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Kananen", 
            "givenName": "Lauri", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland", 
              "id": "http://www.grid.ac/institutes/grid.9668.1", 
              "name": [
                "Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland", 
                "Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Latonen", 
            "givenName": "Leena", 
            "id": "sg:person.01273350374.27", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01273350374.27"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland", 
              "id": "http://www.grid.ac/institutes/grid.502801.e", 
              "name": [
                "Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Ruusuvuori", 
            "givenName": "Pekka", 
            "id": "sg:person.01244150266.41", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01244150266.41"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/978-3-319-60964-5_44", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1086148497", 
              "https://doi.org/10.1007/978-3-319-60964-5_44"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-24574-4_28", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1017774818", 
              "https://doi.org/10.1007/978-3-319-24574-4_28"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1038/nbt.3722", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1014155836", 
              "https://doi.org/10.1038/nbt.3722"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1038/nmeth.4344", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1090281650", 
              "https://doi.org/10.1038/nmeth.4344"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1038/s41598-017-07599-6", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1091142206", 
              "https://doi.org/10.1038/s41598-017-07599-6"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-58347-1_10", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1091568302", 
              "https://doi.org/10.1007/978-3-319-58347-1_10"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1186/gb-2006-7-10-r100", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1040889351", 
              "https://doi.org/10.1186/gb-2006-7-10-r100"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-540-70715-8_13", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1026523454", 
              "https://doi.org/10.1007/978-3-540-70715-8_13"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-20893-6_42", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1115969961", 
              "https://doi.org/10.1007/978-3-030-20893-6_42"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1186/1471-2105-14-297", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1031327902", 
              "https://doi.org/10.1186/1471-2105-14-297"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s10994-009-5152-4", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1029224515", 
              "https://doi.org/10.1007/s10994-009-5152-4"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s00138-011-0337-9", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1021554950", 
              "https://doi.org/10.1007/s00138-011-0337-9"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2019-02-15", 
        "datePublishedReg": "2019-02-15", 
        "description": "BACKGROUND: Cell counting from cell cultures is required in multiple biological and biomedical research applications. Especially, accurate brightfield-based cell counting methods are needed for cell growth analysis. With deep learning, cells can be detected with high accuracy, but manually annotated training data is required. We propose a method for cell detection that requires annotated training data for one cell line only, and generalizes to other, unseen cell lines.\nRESULTS: Training a deep learning model with one cell line only can provide accurate detections for similar unseen cell lines (domains). However, if the new domain is very dissimilar from training domain, high precision but lower recall is achieved. Generalization capabilities of the model can be improved with training data transformations, but only to a certain degree. To further improve the detection accuracy of unseen domains, we propose iterative unsupervised domain adaptation method. Predictions of unseen cell lines with high precision enable automatic generation of training data, which is used to train the model together with parts of the previously used annotated training data. We used U-Net-based model, and three consecutive focal planes from brightfield image z-stacks. We trained the model initially with PC-3 cell line, and used LNCaP, BT-474 and 22Rv1 cell lines as target domains for domain adaptation. Highest improvement in accuracy was achieved for 22Rv1 cells. F1-score after supervised training was only 0.65, but after unsupervised domain adaptation we achieved a score of 0.84. Mean accuracy for target domains was 0.87, with mean improvement of 16 percent.\nCONCLUSIONS: With our method for generalized cell detection, we can train a model that accurately detects different cell lines from brightfield images. A new cell line can be introduced to the model without a single manual annotation, and after iterative domain adaptation the model is ready to detect these cells with high accuracy.", 
        "genre": "article", 
        "id": "sg:pub.10.1186/s12859-019-2605-z", 
        "inLanguage": "en", 
        "isAccessibleForFree": true, 
        "isFundedItemOf": [
          {
            "id": "sg:grant.8842214", 
            "type": "MonetaryGrant"
          }, 
          {
            "id": "sg:grant.8837871", 
            "type": "MonetaryGrant"
          }, 
          {
            "id": "sg:grant.8837780", 
            "type": "MonetaryGrant"
          }
        ], 
        "isPartOf": [
          {
            "id": "sg:journal.1023786", 
            "issn": [
              "1471-2105"
            ], 
            "name": "BMC Bioinformatics", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "1", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "20"
          }
        ], 
        "keywords": [
          "unsupervised domain adaptation", 
          "domain adaptation", 
          "training data", 
          "target domain", 
          "unsupervised domain adaptation method", 
          "deep learning models", 
          "domain adaptation methods", 
          "net-based model", 
          "high accuracy", 
          "deep learning", 
          "unseen domains", 
          "automatic generation", 
          "generalization capability", 
          "manual annotation", 
          "detection accuracy", 
          "learning model", 
          "supervised training", 
          "data transformation", 
          "adaptation method", 
          "cell detection", 
          "training domain", 
          "low recall", 
          "new domain", 
          "high precision", 
          "brightfield images", 
          "mean accuracy", 
          "accurate detection", 
          "accuracy", 
          "biomedical research applications", 
          "research applications", 
          "detection", 
          "domain", 
          "annotation", 
          "stack", 
          "learning", 
          "precision", 
          "images", 
          "model", 
          "capability", 
          "data", 
          "method", 
          "counting method", 
          "recall", 
          "certain degree", 
          "adaptation", 
          "applications", 
          "improvement", 
          "training", 
          "cell counting method", 
          "prediction", 
          "counting", 
          "highest improvement", 
          "generation", 
          "focal plane", 
          "transformation", 
          "part", 
          "cell counting", 
          "lines", 
          "cell growth analysis", 
          "analysis", 
          "plane", 
          "degree", 
          "scores", 
          "growth analysis", 
          "percent", 
          "mean improvement", 
          "culture", 
          "cells", 
          "different cell lines", 
          "PC-3 cell line", 
          "new cell line", 
          "BT-474", 
          "cell cultures", 
          "cell lines", 
          "LNCaP", 
          "accurate brightfield-based cell counting methods", 
          "brightfield-based cell counting methods", 
          "unseen cell lines", 
          "similar unseen cell lines", 
          "training data transformations", 
          "iterative unsupervised domain adaptation method", 
          "consecutive focal planes", 
          "brightfield image z", 
          "image z", 
          "generalized cell detection", 
          "single manual annotation", 
          "iterative domain adaptation", 
          "Iterative unsupervised domain adaptation", 
          "brightfield z"
        ], 
        "name": "Iterative unsupervised domain adaptation for generalized cell detection from brightfield z-stacks", 
        "pagination": "80", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1112168759"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1186/s12859-019-2605-z"
            ]
          }, 
          {
            "name": "pubmed_id", 
            "type": "PropertyValue", 
            "value": [
              "30767778"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1186/s12859-019-2605-z", 
          "https://app.dimensions.ai/details/publication/pub.1112168759"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-01-01T18:49", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20220101/entities/gbq_results/article/article_803.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1186/s12859-019-2605-z"
      }
    ]
     


    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1186/s12859-019-2605-z'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1186/s12859-019-2605-z'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1186/s12859-019-2605-z'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1186/s12859-019-2605-z'
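    The same content negotiation works from any HTTP client. Below is a small Python sketch that fetches the JSON-LD record and reads the DOI out of its `productId` entries; `fetch_scigraph_jsonld` and `extract_doi` are illustrative helper names, not part of any SciGraph client library.

    ```python
    import json
    import urllib.request

    SCIGRAPH_URI = "https://scigraph.springernature.com/pub.10.1186/s12859-019-2605-z"

    def fetch_scigraph_jsonld(uri: str):
        """Fetch a SciGraph record as JSON-LD (a list holding one record dict)."""
        req = urllib.request.Request(uri, headers={"Accept": "application/ld+json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def extract_doi(record: list) -> str:
        """Read the DOI from the record's productId entries."""
        for prod in record[0].get("productId", []):
            if prod.get("name") == "doi":
                return prod["value"][0]
        return ""

    # Example (requires network access):
    # record = fetch_scigraph_jsonld(SCIGRAPH_URI)
    # extract_doi(record)  # "10.1186/s12859-019-2605-z"
    ```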


     




