Bias in error estimation when using cross-validation for model selection


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2006-12

AUTHORS

Sudhir Varma, Richard Simon

ABSTRACT

BACKGROUND: Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data. RESULTS: We used CV to optimize the classification parameters for two kinds of classifiers: Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these "null" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids, while leave-one-out CV (LOOCV) was used for the SVM. Independent test data were created to estimate the true error. With "null" and "non-null" (with differential expression between the classes) data, we also tested a nested CV procedure, in which an inner CV loop tunes the parameters while an outer CV loop computes an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes for the "null" datasets, the CV error estimate for Shrunken Centroids with the optimal parameters was less than 30% on 18.5% of simulated training datasets. For SVM with optimal parameters, the estimated error rate was less than 30% on 38% of "null" datasets. Performance of the optimized classifiers on the independent test set was no better than chance.
The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent test set, for both Shrunken Centroids and SVM classifiers and for both "null" and "non-null" data distributions. CONCLUSION: We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating the true error of a classifier developed using a well-defined algorithm requires that all steps of the algorithm, including classifier parameter tuning, be repeated in each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.
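The nested procedure described in the abstract can be sketched in a few lines. This is an illustrative pure-Python toy, not the authors' code: a nearest-centroid classifier with a tuned feature count stands in for Shrunken Centroids, and the data are "null" (labels unrelated to features). The inner CV loop selects the parameter; the outer CV loop, which repeats the tuning inside every fold, estimates the error of the whole procedure.

```python
# Toy nested-CV demonstration on "null" data (labels independent of features).
# Nearest-centroid with a tuned number of features is a stand-in for the
# Shrunken Centroids classifier discussed in the abstract.
import random
import statistics

random.seed(0)
N_FEATURES = 20

def make_null_data(n):
    """Random features and random labels: true error is 50% by construction."""
    return [([random.gauss(0, 1) for _ in range(N_FEATURES)], random.randrange(2))
            for _ in range(n)]

def centroid(rows):
    return [statistics.mean(col) for col in zip(*rows)]

def train(data, k):
    """Fit class centroids, keeping the k features with the largest
    centroid separation on the training data."""
    c0 = centroid([x for x, y in data if y == 0])
    c1 = centroid([x for x, y in data if y == 1])
    feats = sorted(range(N_FEATURES), key=lambda j: -abs(c0[j] - c1[j]))[:k]
    return c0, c1, feats

def predict(model, x):
    c0, c1, feats = model
    d0 = sum((x[j] - c0[j]) ** 2 for j in feats)
    d1 = sum((x[j] - c1[j]) ** 2 for j in feats)
    return 0 if d0 <= d1 else 1

def cv_error(data, k, folds=5):
    """Plain k-fold CV error estimate for a fixed parameter k."""
    errors = 0
    for f in range(folds):
        test = data[f::folds]
        fit = [d for i, d in enumerate(data) if i % folds != f]
        model = train(fit, k)
        errors += sum(predict(model, x) != y for x, y in test)
    return errors / len(data)

def nested_cv_error(data, ks, folds=5):
    """Outer CV estimates error; the inner CV (run on the outer-training
    part only) re-tunes k inside every outer fold."""
    errors = 0
    for f in range(folds):
        test = data[f::folds]
        fit = [d for i, d in enumerate(data) if i % folds != f]
        best_k = min(ks, key=lambda k: cv_error(fit, k))
        model = train(fit, best_k)
        errors += sum(predict(model, x) != y for x, y in test)
    return errors / len(data)

data = make_null_data(60)
ks = [1, 2, 5, 10, 20]
biased = min(cv_error(data, k) for k in ks)  # tuned and reported on the same CV
nested = nested_cv_error(data, ks)           # tuning repeated in each outer fold
```

Because `biased` is the minimum of several noisy estimates on the same splits used for tuning, it tends to fall below the 50% error that null data dictates, while the nested estimate stays close to it; that selection effect is the bias the paper quantifies.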

PAGES

91

Identifiers

URI

http://scigraph.springernature.com/pub.10.1186/1471-2105-7-91

DOI

http://dx.doi.org/10.1186/1471-2105-7-91

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1034791610

PUBMED

https://www.ncbi.nlm.nih.gov/pubmed/16504092



JSON-LD is the canonical representation for SciGraph data.


[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0104", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Statistics", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/01", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Mathematical Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Algorithms", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Artificial Intelligence", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Bias", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Computer Simulation", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Data Interpretation, Statistical", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Gene Expression Profiling", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Models, Genetic", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Models, Statistical", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Oligonucleotide Array Sequence Analysis", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Pattern Recognition, Automated", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Reproducibility of Results", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Sensitivity and Specificity", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "National Cancer Institute", 
          "id": "https://www.grid.ac/institutes/grid.48336.3a", 
          "name": [
            "Biometric Research Branch, National Cancer Institute, Bethesda, MD, USA"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Varma", 
        "givenName": "Sudhir", 
        "id": "sg:person.013700731017.89", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013700731017.89"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "National Cancer Institute", 
          "id": "https://www.grid.ac/institutes/grid.48336.3a", 
          "name": [
            "Biometric Research Branch, National Cancer Institute, Bethesda, MD, USA"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Simon", 
        "givenName": "Richard", 
        "id": "sg:person.01144427036.34", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01144427036.34"
        ], 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "https://doi.org/10.1016/s0014-5793(03)01275-4", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1000158907"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1093/bioinformatics/bti499", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1013038565"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1093/bioinformatics/bti294", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1019469452"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1093/jnci/95.1.14", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1023174537"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/s0140-6736(03)12775-4", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1033430834"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1073/pnas.102102699", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1034359388"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1073/pnas.082099299", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1037994416"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2006-12", 
    "datePublishedReg": "2006-12-01", 
    "description": "BACKGROUND: Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data.\nRESULTS: We used CV to optimize the classification parameters for two kinds of classifiers; Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these \"null\" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids while Leave-One-Out-CV (LOOCV) was used for the SVM. Independent test data was created to estimate the true error. With \"null\" and \"non null\" (with differential expression between the classes) data, we also tested a nested CV procedure, where an inner CV loop is used to perform the tuning of the parameters while an outer CV is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes for the \"null\" datasets, the CV error estimate for the Shrunken Centroid with the optimal parameters was less than 30% on 18.5% of simulated training data-sets. For SVM with optimal parameters the estimated error rate was less than 30% on 38% of \"null\" data-sets. Performance of the optimized classifiers on the independent test set was no better than chance. 
The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent testing set for both Shrunken Centroids and SVM classifiers for \"null\" and \"non-null\" data distributions.\nCONCLUSION: We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating true error of a classifier developed using a well defined algorithm requires that all steps of the algorithm, including classifier parameter tuning, be repeated in each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.", 
    "genre": "research_article", 
    "id": "sg:pub.10.1186/1471-2105-7-91", 
    "inLanguage": [
      "en"
    ], 
    "isAccessibleForFree": true, 
    "isPartOf": [
      {
        "id": "sg:journal.1023786", 
        "issn": [
          "1471-2105"
        ], 
        "name": "BMC Bioinformatics", 
        "type": "Periodical"
      }, 
      {
        "issueNumber": "1", 
        "type": "PublicationIssue"
      }, 
      {
        "type": "PublicationVolume", 
        "volumeNumber": "7"
      }
    ], 
    "name": "Bias in error estimation when using cross-validation for model selection", 
    "pagination": "91", 
    "productId": [
      {
        "name": "readcube_id", 
        "type": "PropertyValue", 
        "value": [
          "a5bbf7afed3f8dd5d498f6c15a76ec9e7c5da91381712b69880a253b2db0a643"
        ]
      }, 
      {
        "name": "pubmed_id", 
        "type": "PropertyValue", 
        "value": [
          "16504092"
        ]
      }, 
      {
        "name": "nlm_unique_id", 
        "type": "PropertyValue", 
        "value": [
          "100965194"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1186/1471-2105-7-91"
        ]
      }, 
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1034791610"
        ]
      }
    ], 
    "sameAs": [
      "https://doi.org/10.1186/1471-2105-7-91", 
      "https://app.dimensions.ai/details/publication/pub.1034791610"
    ], 
    "sdDataset": "articles", 
    "sdDatePublished": "2019-04-11T01:05", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8697_00000506.jsonl", 
    "type": "ScholarlyArticle", 
    "url": "http://link.springer.com/10.1186%2F1471-2105-7-91"
  }
]
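A minimal sketch of pulling the key bibliographic fields out of a SciGraph JSON-LD record like the one above. The dict literal below is a trimmed copy of that record, reduced to just the fields this example reads; the field names (`name`, `author`, `productId`) follow the record as shown.

```python
# Extract title, authors, and identifiers from a (trimmed) SciGraph
# JSON-LD record; field names follow the full record above.
import json

record_json = """
[
  {
    "author": [
      {"familyName": "Varma", "givenName": "Sudhir"},
      {"familyName": "Simon", "givenName": "Richard"}
    ],
    "datePublished": "2006-12",
    "name": "Bias in error estimation when using cross-validation for model selection",
    "productId": [
      {"name": "doi", "type": "PropertyValue", "value": ["10.1186/1471-2105-7-91"]},
      {"name": "pubmed_id", "type": "PropertyValue", "value": ["16504092"]}
    ]
  }
]
"""

record = json.loads(record_json)[0]          # the payload is a one-element list
title = record["name"]
authors = ["{} {}".format(a["givenName"], a["familyName"]) for a in record["author"]]
ids = {p["name"]: p["value"][0] for p in record["productId"]}  # e.g. ids["doi"]

print(title)
print(", ".join(authors))
print(ids["doi"], ids["pubmed_id"])
```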
 


HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1186/1471-2105-7-91'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1186/1471-2105-7-91'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1186/1471-2105-7-91'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1186/1471-2105-7-91'
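The same content negotiation works from the Python standard library. This sketch only constructs the request with the JSON-LD `Accept` header; the commented lines show how it would be sent.

```python
# Build the content-negotiated request from the curl examples above
# using only the standard library; the request is constructed, not sent.
import urllib.request

URL = "https://scigraph.springernature.com/pub.10.1186/1471-2105-7-91"
req = urllib.request.Request(URL, headers={"Accept": "application/ld+json"})

# Sending it would look like:
#   import json
#   with urllib.request.urlopen(req) as resp:
#       data = json.load(resp)

assert req.get_header("Accept") == "application/ld+json"
```

Swapping the `Accept` value for `application/n-triples`, `text/turtle`, or `application/rdf+xml` selects the other serializations, exactly as in the curl examples.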

