Three dilemmas in the integrated assessment of climate change


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

1996-11

AUTHORS

Edward A. Parson

ABSTRACT

These three dilemmas embody the hardest, most important, and most enduring problems of doing assessment well. None admits simple, obvious solutions. Each can be managed better or worse for any particular assessment endeavor, but doing better requires clear understanding of the purpose of the endeavor. What ways of combining different pieces of disciplinary knowledge, of making projections, and of pursuing policy relevance are more or less appropriate will differ, depending on whether a project seeks to characterize uncertainties and gaps in knowledge; to advise a particular policy choice; to support dialog among policy actors; or to facilitate inquiry into relevant values or goals. Evaluation of the relative emphasis, the methods, and the process of an assessment can only be done relative to some such purpose. Of course, some pitfalls may be so serious as to thwart any purpose, as Risbey et al.'s discussion of the global modeling movement reminds us. The global models' most obvious pitfalls — inadequate treatment of uncertainty, neglect of economic adjustment, excessive confidence in predictions — have largely been seen and avoided by the current assessment community (though there may be more to be learned even here). But on the subtler questions of how assessment or modeling can contribute most usefully to policy, little progress has been made since the 1970s. Consequently, though assessment has advanced in many ways since then, IA remains at risk of suffering the same fate as the global models: a cycle of early enthusiasm, followed by a reaction of frustration and excessive, undeserved rejection. Current endeavors in IA have made substantial contributions to identifying and prioritizing knowledge needs, less to informing specific policy choice. Further progress cannot be guided by a single canonical view of what assessment should be and do, but will proceed incrementally down multiple paths. Several paths currently appear promising: analytic approaches to better represent multiple actors, diverse preferences, and multiple valued outcomes; better representation and application of uncertainty, including diverse expert opinion; novel methods to link assessment with policy communities; and broader participation in assessment teams and explicit focus on negotiating and elaborating pragmatic, viable critical standards. Risbey et al.'s call to develop institutions for critical reflection, mutual learning, and self-improvement will be crucial in developing and evaluating the progress made down these paths. Morgan and Dowlatabadi's checklist for desiderata of IA is a good starting point for a conversation about assessment standards, to which I would propose a few extensions and elaborations. First, there should be not just multiple assessments, but multiple assessment projects using diverse collections of methods and approaches. Second, assessment projects should explore novel methods for connecting their work with the policy community. Third, the approach should be iterative not just within each project, but across assessment projects and between them and the policy community. Fourth, assessors should not be embarrassed by, or seek to disguise, results that are merely illustrative, non-authoritative, and suggestive; these should be acknowledged as such, and the vigorous questioning and critique that will come, including partisan critique, accepted. Do not seek to avoid criticism by mumbling. 
An important limit to this checklist approach is suggested, though, by the way various writers have groped to define assessment standards by analogy to other domains, revealing how limited is our understanding of how to evaluate assessment. Risbey et al. refer to ‘connoisseurship’, as if assessment is like fine wine; Clark and Majone (1985) refer to artistic criticism, as if assessment is like opera singing. If these analogies are appropriate, then pursuing a single set of critical standards for assessment is at least premature, possibly erroneous. Rather, there should be a diversity of approaches, perhaps so broad that no single set of criteria for excellence could be defined. The pragmatic middle way between the too-limiting application of a single set of standards, and an anarchic refusal to evaluate, will have to be negotiated, defined, and improved incrementally.

PAGES

315-326

References to SciGraph publications

  • 1996-11. Assessing integrated assessments in CLIMATIC CHANGE
  • 1996-11. Learning from integrated assessment of climate change in CLIMATIC CHANGE
Journal

    TITLE

    Climatic Change

    ISSUE

    3-4

    VOLUME

    34

    Author Affiliations

    John F. Kennedy School of Government, Harvard University, 79 JFK Street, 02138, Cambridge, MA, U.S.A.

    Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/bf00139295

    DOI

    http://dx.doi.org/10.1007/bf00139295

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1011635841



    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool (SDTT).

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/1701", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Psychology", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/17", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Psychology and Cognitive Sciences", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Harvard University", 
              "id": "https://www.grid.ac/institutes/grid.38142.3c", 
              "name": [
                "John F. Kennedy School of Government, Harvard University, 79 JFK Street, 02138, Cambridge, MA, U.S.A."
              ], 
              "type": "Organization"
            }, 
            "familyName": "Parson", 
            "givenName": "Edward A.", 
            "id": "sg:person.01275407720.89", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01275407720.89"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/bf00139298", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1020534978", 
              "https://doi.org/10.1007/bf00139298"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/bf00139298", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1020534978", 
              "https://doi.org/10.1007/bf00139298"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/bf00139297", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1027697354", 
              "https://doi.org/10.1007/bf00139297"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/bf00139297", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1027697354", 
              "https://doi.org/10.1007/bf00139297"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1177/016224398501000302", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1038502076"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1177/016224398501000302", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1038502076"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "1996-11", 
        "datePublishedReg": "1996-11-01", 
        "description": "These three dilemmas embody the hardest, most important, and most enduring problems of doing assessment well. None admits simple, obvious solutions. Each can be managed better or worse for any particular assessment endeavor, but doing better requires clear understanding of the purpose of the endeavor. What ways of combining different pieces of disciplinary knowledge, of making projections, and of pursuing policy relevance are more or less appropriate will differ, depending on whether a project seeks to characterize uncertainties and gaps in knowledge; to advise a particular policy choice; to support dialog among policy actors; or to facilitate inquiry into relevant values or goals. Evaluation of the relative emphasis, the methods, and the process of an assessment can only be done relative to some such purpose. Of course, some pitfalls may be so serious as to thwart any purpose, as Risbey et al.'s discussion of the global modeling movement reminds us. The global models' most obvious pitfalls \u2014 inadequate treatment of uncertainty, neglect of economic adjustment, excessive confidence in predictions \u2014 have largely been seen and avoided by the current assessment community (though there may be more to be learned even here). But on the subtler questions of how assessment or modeling can contribute most usefully to policy, little progress has been made since the 1970s. Consequently, though assessment has advanced in many ways since then, IA remains at risk of suffering the same fate as the global models: a cycle of early enthusiasm, followed by a reaction of frustration and excessive, undeserved rejection. Current endeavors in IA have made substantial contributions to identifying and prioritizing knowledge needs, less to informing specific policy choice. Further progress cannot be guided by a single canonical view of what assessment should be and do, but will proceed incrementally down multiple paths. Several paths currently appear promising: analytic approaches to better represent multiple actors, diverse preferences, and multiple valued outcomes; better representation and application of uncertainty, including diverse expert opinion; novel methods to link assessment with policy communities; and broader participation in assessment teams and explicit focus on negotiating and elaborating pragmatic, viable critical standards. Risbey et al.'s call to develop institutions for critical reflection, mutual learning, and self-improvement will be crucial in developing and evaluating the progress made down these paths. Morgan and Dowlatabadi's checklist for desiderata of IA is a good starting point for a conversation about assessment standards, to which I would propose a few extensions and elaborations. First, there should be not just multiple assessments, but multiple assessment projects using diverse collections of methods and approaches. Second, assessment projects should explore novel methods for connecting their work with the policy community. Third, the approach should be iterative not just within each project, but across assessment projects and between them and the policy community. Fourth, assessors should not be embarrassed by, or seek to disguise, results that are merely illustrative, non-authoritative, and suggestive; these should be acknowledged as such, and the vigorous questioning and critique that will come, including partisan critique, accepted. Do not seek to avoid criticism by mumbling. 
An important limit to this checklist approach is suggested, though, by the way various writers have groped to define assessment standards by analogy to other domains, revealing how limited is our understanding of how to evaluate assessment. Risbey et al. refer to \u2018connoisseurship\u2019, as if assessment is like fine wine; Clark and Majone (1985) refer to artistic criticism, as if assessment is like opera singing. If these analogies are appropriate, then pursuing a single set of critical standards for assessment is at least premature, possibly erroneous. Rather, there should be a diversity of approaches, perhaps so broad that no single set of criteria for excellence could be defined. The pragmatic middle way between the too-limiting application of a single set of standards, and an anarchic refusal to evaluate, will have to be negotiated, defined, and improved incrementally.", 
        "genre": "research_article", 
        "id": "sg:pub.10.1007/bf00139295", 
        "inLanguage": [
          "en"
        ], 
        "isAccessibleForFree": false, 
        "isPartOf": [
          {
            "id": "sg:journal.1028211", 
            "issn": [
              "0165-0009", 
              "1573-1480"
            ], 
            "name": "Climatic Change", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "3-4", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "34"
          }
        ], 
        "name": "Three dilemmas in the integrated assessment of climate change", 
        "pagination": "315-326", 
        "productId": [
          {
            "name": "readcube_id", 
            "type": "PropertyValue", 
            "value": [
              "37703e7cd462df5ae1da0d2317febf04b412b0c50ddb93b60898e33fe9eb4f84"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/bf00139295"
            ]
          }, 
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1011635841"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/bf00139295", 
          "https://app.dimensions.ai/details/publication/pub.1011635841"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2019-04-11T13:59", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000371_0000000371/records_130826_00000001.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "http://link.springer.com/10.1007/BF00139295"
      }
    ]
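
    As an illustrative sketch (not part of the SciGraph documentation), the JSON-LD record above can be loaded into an RDF graph with Python's rdflib, assuming rdflib >= 6.0 (which bundles a JSON-LD parser) and a hypothetical local copy of the record saved as record.jsonld:

    # Sketch: parse the JSON-LD record into an RDF graph and reproduce
    # summary counts like those shown in the triples table further below.
    # Assumes rdflib >= 6.0; "record.jsonld" is a hypothetical local copy,
    # and parsing resolves the remote @context over the network.
    from rdflib import BNode, Graph, Literal

    g = Graph()
    g.parse("record.jsonld", format="json-ld")

    print(len(g), "triples")
    print(len({p for _, p, _ in g}), "distinct predicates")
    print(sum(1 for n in g.all_nodes() if isinstance(n, Literal)), "literals")
    print(sum(1 for n in g.all_nodes() if isinstance(n, BNode)), "blank nodes")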
     

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data that is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/bf00139295'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/bf00139295'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/bf00139295'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/bf00139295'
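
    The same content negotiation can also be scripted. Below is a minimal sketch in Python (not official SciGraph tooling), using the third-party requests library, with the URL and Accept headers taken from the curl commands above:

    # Sketch: fetch the record in any of the four serializations above.
    # Assumes the `requests` package is installed (pip install requests).
    import requests

    RECORD_URL = "https://scigraph.springernature.com/pub.10.1007/bf00139295"

    ACCEPT = {
        "json-ld": "application/ld+json",
        "n-triples": "application/n-triples",
        "turtle": "text/turtle",
        "rdf-xml": "application/rdf+xml",
    }

    def fetch_record(fmt="json-ld"):
        """Return the record serialized in the requested RDF format."""
        resp = requests.get(RECORD_URL, headers={"Accept": ACCEPT[fmt]})
        resp.raise_for_status()
        return resp.text

    print(fetch_record("turtle")[:300])  # peek at the Turtle serialization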


     

    This table displays all metadata directly associated with this object as RDF triples.

    72 TRIPLES      21 PREDICATES      30 URIs      19 LITERALS      7 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/bf00139295 schema:about anzsrc-for:17
    2 anzsrc-for:1701
    3 schema:author Nde2e253e770340b6a7aa4a84b104cccb
    4 schema:citation sg:pub.10.1007/bf00139297
    5 sg:pub.10.1007/bf00139298
    6 https://doi.org/10.1177/016224398501000302
    7 schema:datePublished 1996-11
    8 schema:datePublishedReg 1996-11-01
    9 schema:description (full abstract text; reproduced under ABSTRACT above)
    10 schema:genre research_article
    11 schema:inLanguage en
    12 schema:isAccessibleForFree false
    13 schema:isPartOf N625229b14b4b4160bd9b192eab5341f9
    14 Nc3f8f3855f1847478b874d0857981bc6
    15 sg:journal.1028211
    16 schema:name Three dilemmas in the integrated assessment of climate change
    17 schema:pagination 315-326
    18 schema:productId N57d1a7fed67549188e8d1dcaac9fc897
    19 Nafde99d1a8b94195b118807a4b4dee56
    20 Nb46e78b0766d4577a0782428bf54e09d
    21 schema:sameAs https://app.dimensions.ai/details/publication/pub.1011635841
    22 https://doi.org/10.1007/bf00139295
    23 schema:sdDatePublished 2019-04-11T13:59
    24 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    25 schema:sdPublisher N97793dbc83a2484e91b14c19b89a61ca
    26 schema:url http://link.springer.com/10.1007/BF00139295
    27 sgo:license sg:explorer/license/
    28 sgo:sdDataset articles
    29 rdf:type schema:ScholarlyArticle
    30 N57d1a7fed67549188e8d1dcaac9fc897 schema:name readcube_id
    31 schema:value 37703e7cd462df5ae1da0d2317febf04b412b0c50ddb93b60898e33fe9eb4f84
    32 rdf:type schema:PropertyValue
    33 N625229b14b4b4160bd9b192eab5341f9 schema:volumeNumber 34
    34 rdf:type schema:PublicationVolume
    35 N97793dbc83a2484e91b14c19b89a61ca schema:name Springer Nature - SN SciGraph project
    36 rdf:type schema:Organization
    37 Nafde99d1a8b94195b118807a4b4dee56 schema:name doi
    38 schema:value 10.1007/bf00139295
    39 rdf:type schema:PropertyValue
    40 Nb46e78b0766d4577a0782428bf54e09d schema:name dimensions_id
    41 schema:value pub.1011635841
    42 rdf:type schema:PropertyValue
    43 Nc3f8f3855f1847478b874d0857981bc6 schema:issueNumber 3-4
    44 rdf:type schema:PublicationIssue
    45 Nde2e253e770340b6a7aa4a84b104cccb rdf:first sg:person.01275407720.89
    46 rdf:rest rdf:nil
    47 anzsrc-for:17 schema:inDefinedTermSet anzsrc-for:
    48 schema:name Psychology and Cognitive Sciences
    49 rdf:type schema:DefinedTerm
    50 anzsrc-for:1701 schema:inDefinedTermSet anzsrc-for:
    51 schema:name Psychology
    52 rdf:type schema:DefinedTerm
    53 sg:journal.1028211 schema:issn 0165-0009
    54 1573-1480
    55 schema:name Climatic Change
    56 rdf:type schema:Periodical
    57 sg:person.01275407720.89 schema:affiliation https://www.grid.ac/institutes/grid.38142.3c
    58 schema:familyName Parson
    59 schema:givenName Edward A.
    60 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01275407720.89
    61 rdf:type schema:Person
    62 sg:pub.10.1007/bf00139297 schema:sameAs https://app.dimensions.ai/details/publication/pub.1027697354
    63 https://doi.org/10.1007/bf00139297
    64 rdf:type schema:CreativeWork
    65 sg:pub.10.1007/bf00139298 schema:sameAs https://app.dimensions.ai/details/publication/pub.1020534978
    66 https://doi.org/10.1007/bf00139298
    67 rdf:type schema:CreativeWork
    68 https://doi.org/10.1177/016224398501000302 schema:sameAs https://app.dimensions.ai/details/publication/pub.1038502076
    69 rdf:type schema:CreativeWork
    70 https://www.grid.ac/institutes/grid.38142.3c schema:alternateName Harvard University
    71 schema:name John F. Kennedy School of Government, Harvard University, 79 JFK Street, 02138, Cambridge, MA, U.S.A.
    72 rdf:type schema:Organization
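
    The triples can also be queried directly. Here is a hedged sketch reusing the graph g from the rdflib example above; the expansion of the schema: prefix to http://schema.org/ is an assumption based on the prefixes rendered in this table:

    # Sketch: SPARQL over the parsed graph. The schema: namespace URI is
    # an assumption; adjust it if the SciGraph context maps it differently.
    results = g.query("""
        PREFIX schema: <http://schema.org/>
        SELECT ?title ?date WHERE {
            ?pub schema:name ?title ;
                 schema:datePublished ?date .
        }
    """)
    for title, date in results:
        print(title, date)  # e.g. the article title and its 1996-11 date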
     



