Bias in error estimation when using cross-validation for model selection


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2006-12

AUTHORS

Sudhir Varma, Richard Simon

ABSTRACT

BACKGROUND: Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data.

RESULTS: We used CV to optimize the classification parameters for two kinds of classifiers: Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these "null" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids, while leave-one-out CV (LOOCV) was used for the SVM. Independent test data were created to estimate the true error. With "null" and "non-null" (with differential expression between the classes) data, we also tested a nested CV procedure, in which an inner CV loop is used to tune the parameters while an outer CV loop is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes for the "null" datasets, the CV error estimate for Shrunken Centroids with the optimal parameters was less than 30% on 18.5% of simulated training datasets. For the SVM with optimal parameters, the estimated error rate was less than 30% on 38% of "null" datasets. Performance of the optimized classifiers on the independent test set was no better than chance. The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent test set, for both Shrunken Centroids and SVM classifiers and for both "null" and "non-null" data distributions.

CONCLUSION: We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating the true error of a classifier developed using a well-defined algorithm requires that all steps of the algorithm, including classifier parameter tuning, be repeated in each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.
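The nested procedure the abstract describes can be sketched with scikit-learn, which supports it directly by passing a tuned estimator to an outer CV loop. This is only an illustrative stand-in for the paper's own implementation: the linear SVM, the C grid, the fold counts, and the dataset size here are assumptions, not the authors' settings.

```python
# Sketch of nested CV on a "null" dataset (labels independent of features).
# Assumed setup: linear SVM, small C grid, 5-fold inner and outer CV.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 100))   # 40 samples, 100 uninformative features
y = np.repeat([0, 1], 20)            # two classes with no real difference

# Inner loop: tune C by CV. Outer loop: estimate the error of the
# *entire tuning procedure*, not just the tuned classifier.
inner = GridSearchCV(SVC(kernel="linear"), {"C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
outer_scores = cross_val_score(inner, X, y, cv=5)
nested_error = 1.0 - outer_scores.mean()

# Naive (biased) estimate: report the best inner-CV score itself,
# i.e. the quantity the paper shows to be optimistically biased.
inner.fit(X, y)
naive_error = 1.0 - inner.best_score_
print(f"nested CV error: {nested_error:.2f}, naive CV error: {naive_error:.2f}")
```

On null data the nested estimate should hover around chance (0.5), while the naive estimate is the minimum over the grid and therefore tends to be optimistic; the gap is the bias the paper quantifies.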

PAGES

91

Identifiers

URI

http://scigraph.springernature.com/pub.10.1186/1471-2105-7-91

DOI

http://dx.doi.org/10.1186/1471-2105-7-91

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1034791610

PUBMED

https://www.ncbi.nlm.nih.gov/pubmed/16504092



JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google's Structured Data Testing Tool.

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0104", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Statistics", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/01", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Mathematical Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Algorithms", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Artificial Intelligence", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Bias", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Computer Simulation", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Data Interpretation, Statistical", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Gene Expression Profiling", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Models, Genetic", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Models, Statistical", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Oligonucleotide Array Sequence Analysis", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Pattern Recognition, Automated", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Reproducibility of Results", 
        "type": "DefinedTerm"
      }, 
      {
        "inDefinedTermSet": "https://www.nlm.nih.gov/mesh/", 
        "name": "Sensitivity and Specificity", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "National Cancer Institute", 
          "id": "https://www.grid.ac/institutes/grid.48336.3a", 
          "name": [
            "Biometric Research Branch, National Cancer Institute, Bethesda, MD, USA"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Varma", 
        "givenName": "Sudhir", 
        "id": "sg:person.013700731017.89", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013700731017.89"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "National Cancer Institute", 
          "id": "https://www.grid.ac/institutes/grid.48336.3a", 
          "name": [
            "Biometric Research Branch, National Cancer Institute, Bethesda, MD, USA"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Simon", 
        "givenName": "Richard", 
        "id": "sg:person.01144427036.34", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01144427036.34"
        ], 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "https://doi.org/10.1016/s0014-5793(03)01275-4", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1000158907"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1093/bioinformatics/bti499", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1013038565"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1093/bioinformatics/bti294", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1019469452"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1093/jnci/95.1.14", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1023174537"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/s0140-6736(03)12775-4", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1033430834"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1073/pnas.102102699", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1034359388"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1073/pnas.082099299", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1037994416"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2006-12", 
    "datePublishedReg": "2006-12-01", 
    "description": "BACKGROUND: Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data.\nRESULTS: We used CV to optimize the classification parameters for two kinds of classifiers; Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these \"null\" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids while Leave-One-Out-CV (LOOCV) was used for the SVM. Independent test data was created to estimate the true error. With \"null\" and \"non null\" (with differential expression between the classes) data, we also tested a nested CV procedure, where an inner CV loop is used to perform the tuning of the parameters while an outer CV is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes for the \"null\" datasets, the CV error estimate for the Shrunken Centroid with the optimal parameters was less than 30% on 18.5% of simulated training data-sets. For SVM with optimal parameters the estimated error rate was less than 30% on 38% of \"null\" data-sets. Performance of the optimized classifiers on the independent test set was no better than chance. 
The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent testing set for both Shrunken Centroids and SVM classifiers for \"null\" and \"non-null\" data distributions.\nCONCLUSION: We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating true error of a classifier developed using a well defined algorithm requires that all steps of the algorithm, including classifier parameter tuning, be repeated in each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.", 
    "genre": "research_article", 
    "id": "sg:pub.10.1186/1471-2105-7-91", 
    "inLanguage": [
      "en"
    ], 
    "isAccessibleForFree": true, 
    "isPartOf": [
      {
        "id": "sg:journal.1023786", 
        "issn": [
          "1471-2105"
        ], 
        "name": "BMC Bioinformatics", 
        "type": "Periodical"
      }, 
      {
        "issueNumber": "1", 
        "type": "PublicationIssue"
      }, 
      {
        "type": "PublicationVolume", 
        "volumeNumber": "7"
      }
    ], 
    "name": "Bias in error estimation when using cross-validation for model selection", 
    "pagination": "91", 
    "productId": [
      {
        "name": "readcube_id", 
        "type": "PropertyValue", 
        "value": [
          "a5bbf7afed3f8dd5d498f6c15a76ec9e7c5da91381712b69880a253b2db0a643"
        ]
      }, 
      {
        "name": "pubmed_id", 
        "type": "PropertyValue", 
        "value": [
          "16504092"
        ]
      }, 
      {
        "name": "nlm_unique_id", 
        "type": "PropertyValue", 
        "value": [
          "100965194"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1186/1471-2105-7-91"
        ]
      }, 
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1034791610"
        ]
      }
    ], 
    "sameAs": [
      "https://doi.org/10.1186/1471-2105-7-91", 
      "https://app.dimensions.ai/details/publication/pub.1034791610"
    ], 
    "sdDataset": "articles", 
    "sdDatePublished": "2019-04-11T01:05", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8697_00000506.jsonl", 
    "type": "ScholarlyArticle", 
    "url": "http://link.springer.com/10.1186%2F1471-2105-7-91"
  }
]
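Because JSON-LD is plain JSON, the record above can be processed with nothing but the standard library. A minimal sketch, using a trimmed copy of fields taken verbatim from the record:

```python
# Extract a few fields from the SciGraph JSON-LD record (key names and
# values copied from the record above; the snippet is a trimmed subset).
import json

record_jsonld = """[{
  "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
  "name": "Bias in error estimation when using cross-validation for model selection",
  "datePublished": "2006-12",
  "author": [
    {"familyName": "Varma", "givenName": "Sudhir", "type": "Person"},
    {"familyName": "Simon", "givenName": "Richard", "type": "Person"}
  ],
  "sameAs": ["https://doi.org/10.1186/1471-2105-7-91"]
}]"""

record = json.loads(record_jsonld)[0]          # SciGraph wraps the record in a list
authors = [f"{a['givenName']} {a['familyName']}" for a in record["author"]]
print(record["name"])
print(", ".join(authors))
print(record["datePublished"])
```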

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1186/1471-2105-7-91'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1186/1471-2105-7-91'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1186/1471-2105-7-91'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1186/1471-2105-7-91'
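The curl commands above map directly onto Python's standard library via the same content negotiation. A sketch (whether the endpoint still serves these representations is not guaranteed; the fetch itself is left commented out):

```python
# Build content-negotiated requests for the SciGraph record, mirroring
# `curl -H 'Accept: ...' <url>` for each of the four RDF serializations.
import urllib.request

FORMATS = {
    "json-ld":   "application/ld+json",
    "n-triples": "application/n-triples",
    "turtle":    "text/turtle",
    "rdf-xml":   "application/rdf+xml",
}

RECORD_URL = "https://scigraph.springernature.com/pub.10.1186/1471-2105-7-91"

def build_request(fmt: str) -> urllib.request.Request:
    """Return a Request whose Accept header selects the desired serialization."""
    return urllib.request.Request(RECORD_URL, headers={"Accept": FORMATS[fmt]})

req = build_request("turtle")
print(req.get_header("Accept"))
# body = urllib.request.urlopen(req).read()  # would perform the actual fetch
```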


 

This table displays all metadata directly associated to this object as RDF triples.

144 TRIPLES      21 PREDICATES      48 URIs      33 LITERALS      21 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1186/1471-2105-7-91 schema:about N1ddf3691b7ad4ad589a6e334acdbaff6
2 N2593970e792f4f649d17926714ceb9d9
3 N2f39435eb68c431baf046835c2909547
4 N30ecf051272b49a0b76ececdc382d327
5 N4d133d2bfc694387a16a73eb222d0dff
6 N5b0ce279a7214fa4be45842b550c3531
7 N6c6d8f80d4154c2ea0bcb07b1ac03eb1
8 N89af05830c4449debc0fb49b60fadb4e
9 N9ebfe8adfa17405ca0f2256ba4c9aa20
10 Na4f0485ebe684eef87b3dfa2cfc60253
11 Nb7ddab0039c1486381bb3a7b569ed089
12 Nddb423258c3a41399f23c5a60ce463f6
13 anzsrc-for:01
14 anzsrc-for:0104
15 schema:author N349a8c5e754f4854b065d97da73f2f0e
16 schema:citation https://doi.org/10.1016/s0014-5793(03)01275-4
17 https://doi.org/10.1016/s0140-6736(03)12775-4
18 https://doi.org/10.1073/pnas.082099299
19 https://doi.org/10.1073/pnas.102102699
20 https://doi.org/10.1093/bioinformatics/bti294
21 https://doi.org/10.1093/bioinformatics/bti499
22 https://doi.org/10.1093/jnci/95.1.14
23 schema:datePublished 2006-12
24 schema:datePublishedReg 2006-12-01
25 schema:description BACKGROUND: Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data. RESULTS: We used CV to optimize the classification parameters for two kinds of classifiers; Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these "null" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids while Leave-One-Out-CV (LOOCV) was used for the SVM. Independent test data was created to estimate the true error. With "null" and "non null" (with differential expression between the classes) data, we also tested a nested CV procedure, where an inner CV loop is used to perform the tuning of the parameters while an outer CV is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes for the "null" datasets, the CV error estimate for the Shrunken Centroid with the optimal parameters was less than 30% on 18.5% of simulated training data-sets. For SVM with optimal parameters the estimated error rate was less than 30% on 38% of "null" data-sets. Performance of the optimized classifiers on the independent test set was no better than chance. 
The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent testing set for both Shrunken Centroids and SVM classifiers for "null" and "non-null" data distributions. CONCLUSION: We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating true error of a classifier developed using a well defined algorithm requires that all steps of the algorithm, including classifier parameter tuning, be repeated in each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.
26 schema:genre research_article
27 schema:inLanguage en
28 schema:isAccessibleForFree true
29 schema:isPartOf N20c25794c269434fada9cecdb64de1fd
30 N3eee7a2eeb0a43a5b5fc330e5515fa38
31 sg:journal.1023786
32 schema:name Bias in error estimation when using cross-validation for model selection
33 schema:pagination 91
34 schema:productId N0ecd3d83c49f44f6824377438af3fe40
35 N64dbe58477494e36bfc620a83f118a1f
36 N6aa3c27014e64f0bb2ef23bcff26b1dd
37 N6b663bdb1c4045ab8d1caf3ea737eef0
38 N7adc32d5b8fa48e18beeeaa09bbf70e9
39 schema:sameAs https://app.dimensions.ai/details/publication/pub.1034791610
40 https://doi.org/10.1186/1471-2105-7-91
41 schema:sdDatePublished 2019-04-11T01:05
42 schema:sdLicense https://scigraph.springernature.com/explorer/license/
43 schema:sdPublisher N60f499cc489a4f6f864815a86a27d3ed
44 schema:url http://link.springer.com/10.1186%2F1471-2105-7-91
45 sgo:license sg:explorer/license/
46 sgo:sdDataset articles
47 rdf:type schema:ScholarlyArticle
48 N0ecd3d83c49f44f6824377438af3fe40 schema:name dimensions_id
49 schema:value pub.1034791610
50 rdf:type schema:PropertyValue
51 N1ddf3691b7ad4ad589a6e334acdbaff6 schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
52 schema:name Reproducibility of Results
53 rdf:type schema:DefinedTerm
54 N209e5ee5902b416089c90998fa68f0e1 rdf:first sg:person.01144427036.34
55 rdf:rest rdf:nil
56 N20c25794c269434fada9cecdb64de1fd schema:issueNumber 1
57 rdf:type schema:PublicationIssue
58 N2593970e792f4f649d17926714ceb9d9 schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
59 schema:name Data Interpretation, Statistical
60 rdf:type schema:DefinedTerm
61 N2f39435eb68c431baf046835c2909547 schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
62 schema:name Algorithms
63 rdf:type schema:DefinedTerm
64 N30ecf051272b49a0b76ececdc382d327 schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
65 schema:name Pattern Recognition, Automated
66 rdf:type schema:DefinedTerm
67 N349a8c5e754f4854b065d97da73f2f0e rdf:first sg:person.013700731017.89
68 rdf:rest N209e5ee5902b416089c90998fa68f0e1
69 N3eee7a2eeb0a43a5b5fc330e5515fa38 schema:volumeNumber 7
70 rdf:type schema:PublicationVolume
71 N4d133d2bfc694387a16a73eb222d0dff schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
72 schema:name Artificial Intelligence
73 rdf:type schema:DefinedTerm
74 N5b0ce279a7214fa4be45842b550c3531 schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
75 schema:name Gene Expression Profiling
76 rdf:type schema:DefinedTerm
77 N60f499cc489a4f6f864815a86a27d3ed schema:name Springer Nature - SN SciGraph project
78 rdf:type schema:Organization
79 N64dbe58477494e36bfc620a83f118a1f schema:name pubmed_id
80 schema:value 16504092
81 rdf:type schema:PropertyValue
82 N6aa3c27014e64f0bb2ef23bcff26b1dd schema:name readcube_id
83 schema:value a5bbf7afed3f8dd5d498f6c15a76ec9e7c5da91381712b69880a253b2db0a643
84 rdf:type schema:PropertyValue
85 N6b663bdb1c4045ab8d1caf3ea737eef0 schema:name doi
86 schema:value 10.1186/1471-2105-7-91
87 rdf:type schema:PropertyValue
88 N6c6d8f80d4154c2ea0bcb07b1ac03eb1 schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
89 schema:name Bias
90 rdf:type schema:DefinedTerm
91 N7adc32d5b8fa48e18beeeaa09bbf70e9 schema:name nlm_unique_id
92 schema:value 100965194
93 rdf:type schema:PropertyValue
94 N89af05830c4449debc0fb49b60fadb4e schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
95 schema:name Models, Genetic
96 rdf:type schema:DefinedTerm
97 N9ebfe8adfa17405ca0f2256ba4c9aa20 schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
98 schema:name Models, Statistical
99 rdf:type schema:DefinedTerm
100 Na4f0485ebe684eef87b3dfa2cfc60253 schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
101 schema:name Sensitivity and Specificity
102 rdf:type schema:DefinedTerm
103 Nb7ddab0039c1486381bb3a7b569ed089 schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
104 schema:name Computer Simulation
105 rdf:type schema:DefinedTerm
106 Nddb423258c3a41399f23c5a60ce463f6 schema:inDefinedTermSet https://www.nlm.nih.gov/mesh/
107 schema:name Oligonucleotide Array Sequence Analysis
108 rdf:type schema:DefinedTerm
109 anzsrc-for:01 schema:inDefinedTermSet anzsrc-for:
110 schema:name Mathematical Sciences
111 rdf:type schema:DefinedTerm
112 anzsrc-for:0104 schema:inDefinedTermSet anzsrc-for:
113 schema:name Statistics
114 rdf:type schema:DefinedTerm
115 sg:journal.1023786 schema:issn 1471-2105
116 schema:name BMC Bioinformatics
117 rdf:type schema:Periodical
118 sg:person.01144427036.34 schema:affiliation https://www.grid.ac/institutes/grid.48336.3a
119 schema:familyName Simon
120 schema:givenName Richard
121 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01144427036.34
122 rdf:type schema:Person
123 sg:person.013700731017.89 schema:affiliation https://www.grid.ac/institutes/grid.48336.3a
124 schema:familyName Varma
125 schema:givenName Sudhir
126 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013700731017.89
127 rdf:type schema:Person
128 https://doi.org/10.1016/s0014-5793(03)01275-4 schema:sameAs https://app.dimensions.ai/details/publication/pub.1000158907
129 rdf:type schema:CreativeWork
130 https://doi.org/10.1016/s0140-6736(03)12775-4 schema:sameAs https://app.dimensions.ai/details/publication/pub.1033430834
131 rdf:type schema:CreativeWork
132 https://doi.org/10.1073/pnas.082099299 schema:sameAs https://app.dimensions.ai/details/publication/pub.1037994416
133 rdf:type schema:CreativeWork
134 https://doi.org/10.1073/pnas.102102699 schema:sameAs https://app.dimensions.ai/details/publication/pub.1034359388
135 rdf:type schema:CreativeWork
136 https://doi.org/10.1093/bioinformatics/bti294 schema:sameAs https://app.dimensions.ai/details/publication/pub.1019469452
137 rdf:type schema:CreativeWork
138 https://doi.org/10.1093/bioinformatics/bti499 schema:sameAs https://app.dimensions.ai/details/publication/pub.1013038565
139 rdf:type schema:CreativeWork
140 https://doi.org/10.1093/jnci/95.1.14 schema:sameAs https://app.dimensions.ai/details/publication/pub.1023174537
141 rdf:type schema:CreativeWork
142 https://www.grid.ac/institutes/grid.48336.3a schema:alternateName National Cancer Institute
143 schema:name Biometric Research Branch, National Cancer Institute, Bethesda, MD, USA
144 rdf:type schema:Organization
 



