Visual analytics for collaborative human-machine confidence in human-centric active learning tasks


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

2019-12

AUTHORS

Phil Legg, Jim Smith, Alexander Downing

ABSTRACT

Active machine learning is a human-centric paradigm that leverages a small labelled dataset to build an initial weak classifier, which can then be improved over time through human-machine collaboration. As new unlabelled samples are observed, the machine can either provide a prediction or query a human ‘oracle’ when it is not confident in its prediction. Of course, just as the machine may lack confidence, the same can also be true of a human ‘oracle’: humans are not all-knowing, untiring oracles. A human’s ability to provide an accurate and confident response will often vary between queries, according to the duration of the current interaction, their level of engagement with the system, and the difficulty of the labelling task. This poses an important question of how uncertainty can be expressed and accounted for in a human-machine collaboration. In short, how can we facilitate a mutually transparent collaboration between two uncertain actors (a person and a machine) that leads to an improved outcome? In this work, we demonstrate the benefit of human-machine collaboration within the process of active learning, where limited data samples are available or where labelling costs are high. To achieve this, we developed a visual analytics tool for active learning that promotes transparency, inspection, understanding, and trust of the learning process through human-machine collaboration. Fundamental to this is the notion of confidence: both parties can report their level of confidence during active learning tasks using the tool, and these reports can be used to inform learning. Human confidence in labels can be accounted for by the machine, the machine can query for samples based on confidence measures, and the machine can report the confidence of its current predictions to the human, furthering the trust and transparency between the collaborative parties. In particular, we find that this can improve the robustness of the classifier when incorrect sample labels are provided due to low confidence or fatigue. Reported confidences can also better inform human-machine sample selection in collaborative sampling. Our experimentation compares the impact of different selection strategies for acquiring samples: machine-driven, human-driven, and collaborative selection. We demonstrate how a collaborative approach can improve trust in the model's robustness, achieving high accuracy and low user correction with only limited data sample selections.
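The query loop described in the abstract (train a weak classifier, let the machine answer when it is confident, and defer to a human oracle otherwise, weighting labels by the human's reported confidence) can be sketched in a few lines. The following is a minimal illustration using scikit-learn, not the authors' tool: the synthetic data, the query budget, and the simulated human confidences are all assumptions made for the example.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Toy setup: a small labelled seed set plus a larger unlabelled pool.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    X_seed, y_seed = X[:20], y[:20]    # small labelled dataset
    X_pool, y_pool = X[20:], y[20:]    # y_pool stands in for the human oracle

    clf = LogisticRegression().fit(X_seed, y_seed)
    X_train, y_train = list(X_seed), list(y_seed)
    weights = [1.0] * len(y_seed)      # seed labels taken at full confidence

    rng = np.random.default_rng(0)
    for _ in range(30):                # query budget (assumed)
        machine_conf = clf.predict_proba(X_pool).max(axis=1)
        idx = int(machine_conf.argmin())   # least-confidence query selection

        # The human oracle labels the queried sample and reports a
        # confidence in [0, 1]; here both are simulated.
        label = y_pool[idx]
        human_conf = rng.uniform(0.5, 1.0)

        X_train.append(X_pool[idx])
        y_train.append(label)
        weights.append(human_conf)     # unconfident labels count for less

        X_pool = np.delete(X_pool, idx, axis=0)
        y_pool = np.delete(y_pool, idx)
        clf.fit(np.array(X_train), y_train, sample_weight=weights)

The citation list in the record below includes modAL (https://doi.org/10.21105/joss.00861), a Python active learning framework that provides query strategies of this kind out of the box.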

PAGES

5

References to SciGraph publications

  • 2018-03-19. VIAL: a unified process for visual interactive labeling in THE VISUAL COMPUTER

IDENTIFIERS

  URI: http://scigraph.springernature.com/pub.10.1186/s13673-019-0167-8

  DOI: http://dx.doi.org/10.1186/s13673-019-0167-8

  DIMENSIONS: https://app.dimensions.ai/details/publication/pub.1112119178


    Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
    Incoming Citations: browse incoming citations for this publication using opencitations.net.

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service, such as the JSON-LD Playground or the Google Structured Data Testing Tool (SDTT).

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "University of the West of England", 
              "id": "https://www.grid.ac/institutes/grid.6518.a", 
              "name": [
                "Department of Computer Science and Creative Technologies, University of the West of England, Bristol, UK"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Legg", 
            "givenName": "Phil", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "University of the West of England", 
              "id": "https://www.grid.ac/institutes/grid.6518.a", 
              "name": [
                "Department of Computer Science and Creative Technologies, University of the West of England, Bristol, UK"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Smith", 
            "givenName": "Jim", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "University of the West of England", 
              "id": "https://www.grid.ac/institutes/grid.6518.a", 
              "name": [
                "Department of Computer Science and Creative Technologies, University of the West of England, Bristol, UK"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Downing", 
            "givenName": "Alexander", 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "https://doi.org/10.1145/1068009.1068228", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1005145224"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1117/12.2007316", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1029635807"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1145/1297231.1297257", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1030105023"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1145/1458082.1458165", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1035499566"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1145/1964897.1964906", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1038490350"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.imavis.2006.04.023", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1039344381"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.artint.2007.09.009", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1039977562"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.patcog.2009.03.007", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1042747053"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1145/2858036.2858529", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1045055595"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/5.949485", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061180379"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/tsmca.2009.2025025", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061795515"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/tvcg.2013.207", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061814071"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/tvcg.2016.2598495", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061814767"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/tvcg.2016.2598828", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061814811"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.2200/s00429ed1v01y201207aim018", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1069288311"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.visinf.2017.01.006", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1084112523"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1111/cgf.13092", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1084211282"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/tvcg.2017.2744683", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1091437749"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/tvcg.2017.2744718", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1091437753"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/tvcg.2017.2744938", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1091437760"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/bdva.2015.7314299", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1093194327"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/icpr.2016.7900034", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094046300"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/ths.2015.7446229", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094989914"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/vizsec.2015.7312772", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095736877"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.23915/distill.00010", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1101456014"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s00371-018-1500-3", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1101624616", 
              "https://doi.org/10.1007/s00371-018-1500-3"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1609/aimag.v35i4.2513", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1103067323"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/fg.2018.00033", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1104459021"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1145/3185524", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1104586961"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.21105/joss.00861", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1106529416"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2019-12", 
        "datePublishedReg": "2019-12-01", 
        "description": "Active machine learning is a human-centric paradigm that leverages a small labelled dataset to build an initial weak classifier, that can then be improved over time through human-machine collaboration. As new unlabelled samples are observed, the machine can either provide a prediction, or query a human \u2018oracle\u2019 when the machine is not confident in its prediction. Of course, just as the machine may lack confidence, the same can also be true of a human \u2018oracle\u2019: humans are not all-knowing, untiring oracles. A human\u2019s ability to provide an accurate and confident response will often vary between queries, according to the duration of the current interaction, their level of engagement with the system, and the difficulty of the labelling task. This poses an important question of how uncertainty can be expressed and accounted for in a human-machine collaboration. In short, how can we facilitate a mutually-transparent collaboration between two uncertain actors\u2014a person and a machine\u2014that leads to an improved outcome? In this work, we demonstrate the benefit of human-machine collaboration within the process of active learning, where limited data samples are available or where labelling costs are high. To achieve this, we developed a visual analytics tool for active learning that promotes transparency, inspection, understanding and trust, of the learning process through human-machine collaboration. Fundamental to the notion of confidence, both parties can report their level of confidence during active learning tasks using the tool, such that this can be used to inform learning. Human confidence of labels can be accounted for by the machine, the machine can query for samples based on confidence measures, and the machine can report confidence of current predictions to the human, to further the trust and transparency between the collaborative parties. In particular, we find that this can improve the robustness of the classifier when incorrect sample labels are provided, due to unconfidence or fatigue. Reported confidences can also better inform human-machine sample selection in collaborative sampling. Our experimentation compares the impact of different selection strategies for acquiring samples: machine-driven, human-driven, and collaborative selection. We demonstrate how a collaborative approach can improve trust in the model robustness, achieving high accuracy and low user correction, with only limited data sample selections.", 
        "genre": "research_article", 
        "id": "sg:pub.10.1186/s13673-019-0167-8", 
        "inLanguage": [
          "en"
        ], 
        "isAccessibleForFree": false, 
        "isPartOf": [
          {
            "id": "sg:journal.1136381", 
            "issn": [
              "2192-1962", 
              "2192-1962"
            ], 
            "name": "Human-centric Computing and Information Sciences", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "1", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "9"
          }
        ], 
        "name": "Visual analytics for collaborative human-machine confidence in human-centric active learning tasks", 
        "pagination": "5", 
        "productId": [
          {
            "name": "readcube_id", 
            "type": "PropertyValue", 
            "value": [
              "8417c335e46addb4cb71ac1322c2021be29dd4fb24a22312d870f386cc5f6d75"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1186/s13673-019-0167-8"
            ]
          }, 
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1112119178"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1186/s13673-019-0167-8", 
          "https://app.dimensions.ai/details/publication/pub.1112119178"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2019-04-11T12:11", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000361_0000000361/records_53981_00000001.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://link.springer.com/10.1186%2Fs13673-019-0167-8"
      }
    ]
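    Since the record above is plain JSON, it can be consumed with nothing more than Python's standard library. The sketch below reads the record from a local file (the filename is an assumption) and prints a few fields; the field names are taken directly from the record above.

    import json

    # Parse the SciGraph JSON-LD record shown above.
    with open("sg_record.json") as f:
        records = json.load(f)

    article = records[0]              # the document is a one-element list
    print(article["name"])            # article title
    print(article["datePublished"])   # 2019-12
    for author in article["author"]:
        print(author["givenName"], author["familyName"])
    for pid in article["productId"]:
        if pid["name"] == "doi":
            print("DOI:", pid["value"][0])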
     

    Download the RDF metadata as: json-ld, nt, turtle, or xml.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1186/s13673-019-0167-8'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1186/s13673-019-0167-8'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1186/s13673-019-0167-8'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1186/s13673-019-0167-8'
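
    The same content negotiation can be scripted. The snippet below mirrors the curl commands above using Python's requests library; the choice of requests is an assumption, and any HTTP client will do.

    import requests

    URL = "https://scigraph.springernature.com/pub.10.1186/s13673-019-0167-8"

    # One MIME type per RDF serialisation, as in the curl examples above.
    FORMATS = {
        "json-ld": "application/ld+json",
        "n-triples": "application/n-triples",
        "turtle": "text/turtle",
        "rdf/xml": "application/rdf+xml",
    }

    for name, mime in FORMATS.items():
        resp = requests.get(URL, headers={"Accept": mime})
        resp.raise_for_status()
        print(f"{name}: {len(resp.text)} bytes")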


     

    This table displays all metadata directly associated with this object as RDF triples.

    162 TRIPLES      21 PREDICATES      57 URIs      19 LITERALS      7 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1186/s13673-019-0167-8 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author Nf0656c092a854a928fe18671498ac51c
    4 schema:citation sg:pub.10.1007/s00371-018-1500-3
    5 https://doi.org/10.1016/j.artint.2007.09.009
    6 https://doi.org/10.1016/j.imavis.2006.04.023
    7 https://doi.org/10.1016/j.patcog.2009.03.007
    8 https://doi.org/10.1016/j.visinf.2017.01.006
    9 https://doi.org/10.1109/5.949485
    10 https://doi.org/10.1109/bdva.2015.7314299
    11 https://doi.org/10.1109/fg.2018.00033
    12 https://doi.org/10.1109/icpr.2016.7900034
    13 https://doi.org/10.1109/ths.2015.7446229
    14 https://doi.org/10.1109/tsmca.2009.2025025
    15 https://doi.org/10.1109/tvcg.2013.207
    16 https://doi.org/10.1109/tvcg.2016.2598495
    17 https://doi.org/10.1109/tvcg.2016.2598828
    18 https://doi.org/10.1109/tvcg.2017.2744683
    19 https://doi.org/10.1109/tvcg.2017.2744718
    20 https://doi.org/10.1109/tvcg.2017.2744938
    21 https://doi.org/10.1109/vizsec.2015.7312772
    22 https://doi.org/10.1111/cgf.13092
    23 https://doi.org/10.1117/12.2007316
    24 https://doi.org/10.1145/1068009.1068228
    25 https://doi.org/10.1145/1297231.1297257
    26 https://doi.org/10.1145/1458082.1458165
    27 https://doi.org/10.1145/1964897.1964906
    28 https://doi.org/10.1145/2858036.2858529
    29 https://doi.org/10.1145/3185524
    30 https://doi.org/10.1609/aimag.v35i4.2513
    31 https://doi.org/10.21105/joss.00861
    32 https://doi.org/10.2200/s00429ed1v01y201207aim018
    33 https://doi.org/10.23915/distill.00010
    34 schema:datePublished 2019-12
    35 schema:datePublishedReg 2019-12-01
    36 schema:description Active machine learning is a human-centric paradigm that leverages a small labelled dataset to build an initial weak classifier, that can then be improved over time through human-machine collaboration. As new unlabelled samples are observed, the machine can either provide a prediction, or query a human ‘oracle’ when the machine is not confident in its prediction. Of course, just as the machine may lack confidence, the same can also be true of a human ‘oracle’: humans are not all-knowing, untiring oracles. A human’s ability to provide an accurate and confident response will often vary between queries, according to the duration of the current interaction, their level of engagement with the system, and the difficulty of the labelling task. This poses an important question of how uncertainty can be expressed and accounted for in a human-machine collaboration. In short, how can we facilitate a mutually-transparent collaboration between two uncertain actors—a person and a machine—that leads to an improved outcome? In this work, we demonstrate the benefit of human-machine collaboration within the process of active learning, where limited data samples are available or where labelling costs are high. To achieve this, we developed a visual analytics tool for active learning that promotes transparency, inspection, understanding and trust, of the learning process through human-machine collaboration. Fundamental to the notion of confidence, both parties can report their level of confidence during active learning tasks using the tool, such that this can be used to inform learning. Human confidence of labels can be accounted for by the machine, the machine can query for samples based on confidence measures, and the machine can report confidence of current predictions to the human, to further the trust and transparency between the collaborative parties. In particular, we find that this can improve the robustness of the classifier when incorrect sample labels are provided, due to unconfidence or fatigue. Reported confidences can also better inform human-machine sample selection in collaborative sampling. Our experimentation compares the impact of different selection strategies for acquiring samples: machine-driven, human-driven, and collaborative selection. We demonstrate how a collaborative approach can improve trust in the model robustness, achieving high accuracy and low user correction, with only limited data sample selections.
    37 schema:genre research_article
    38 schema:inLanguage en
    39 schema:isAccessibleForFree false
    40 schema:isPartOf N8ddc156a6d884d449d3ad6dd95fa9b58
    41 N9a326efcb15141daae4abc98120adc42
    42 sg:journal.1136381
    43 schema:name Visual analytics for collaborative human-machine confidence in human-centric active learning tasks
    44 schema:pagination 5
    45 schema:productId N11e9296680134ab0bd9d10b522b78410
    46 N47134316374f4aaea57833e3904ef710
    47 Nf2437388082f4dc8a9f85fc19d2b6193
    48 schema:sameAs https://app.dimensions.ai/details/publication/pub.1112119178
    49 https://doi.org/10.1186/s13673-019-0167-8
    50 schema:sdDatePublished 2019-04-11T12:11
    51 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    52 schema:sdPublisher N0ab92e79b75b4551b84751fa8ec19cab
    53 schema:url https://link.springer.com/10.1186%2Fs13673-019-0167-8
    54 sgo:license sg:explorer/license/
    55 sgo:sdDataset articles
    56 rdf:type schema:ScholarlyArticle
    57 N0ab92e79b75b4551b84751fa8ec19cab schema:name Springer Nature - SN SciGraph project
    58 rdf:type schema:Organization
    59 N11e9296680134ab0bd9d10b522b78410 schema:name doi
    60 schema:value 10.1186/s13673-019-0167-8
    61 rdf:type schema:PropertyValue
    62 N47134316374f4aaea57833e3904ef710 schema:name dimensions_id
    63 schema:value pub.1112119178
    64 rdf:type schema:PropertyValue
    65 N4f339d12756b4416b54a25842a565fbf schema:affiliation https://www.grid.ac/institutes/grid.6518.a
    66 schema:familyName Legg
    67 schema:givenName Phil
    68 rdf:type schema:Person
    69 N626edbcb1f57421a9d496f7f01fe3621 rdf:first Nc52e798dbc6248c5a1107b28fea11108
    70 rdf:rest N789abcfde55043cb823a79be8d4a7d7d
    71 N789abcfde55043cb823a79be8d4a7d7d rdf:first Nc4d7ae47b67247809fb81061fe53e826
    72 rdf:rest rdf:nil
    73 N8ddc156a6d884d449d3ad6dd95fa9b58 schema:volumeNumber 9
    74 rdf:type schema:PublicationVolume
    75 N9a326efcb15141daae4abc98120adc42 schema:issueNumber 1
    76 rdf:type schema:PublicationIssue
    77 Nc4d7ae47b67247809fb81061fe53e826 schema:affiliation https://www.grid.ac/institutes/grid.6518.a
    78 schema:familyName Downing
    79 schema:givenName Alexander
    80 rdf:type schema:Person
    81 Nc52e798dbc6248c5a1107b28fea11108 schema:affiliation https://www.grid.ac/institutes/grid.6518.a
    82 schema:familyName Smith
    83 schema:givenName Jim
    84 rdf:type schema:Person
    85 Nf0656c092a854a928fe18671498ac51c rdf:first N4f339d12756b4416b54a25842a565fbf
    86 rdf:rest N626edbcb1f57421a9d496f7f01fe3621
    87 Nf2437388082f4dc8a9f85fc19d2b6193 schema:name readcube_id
    88 schema:value 8417c335e46addb4cb71ac1322c2021be29dd4fb24a22312d870f386cc5f6d75
    89 rdf:type schema:PropertyValue
    90 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    91 schema:name Information and Computing Sciences
    92 rdf:type schema:DefinedTerm
    93 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    94 schema:name Artificial Intelligence and Image Processing
    95 rdf:type schema:DefinedTerm
    96 sg:journal.1136381 schema:issn 2192-1962
    97 schema:name Human-centric Computing and Information Sciences
    98 rdf:type schema:Periodical
    99 sg:pub.10.1007/s00371-018-1500-3 schema:sameAs https://app.dimensions.ai/details/publication/pub.1101624616
    100 https://doi.org/10.1007/s00371-018-1500-3
    101 rdf:type schema:CreativeWork
    102 https://doi.org/10.1016/j.artint.2007.09.009 schema:sameAs https://app.dimensions.ai/details/publication/pub.1039977562
    103 rdf:type schema:CreativeWork
    104 https://doi.org/10.1016/j.imavis.2006.04.023 schema:sameAs https://app.dimensions.ai/details/publication/pub.1039344381
    105 rdf:type schema:CreativeWork
    106 https://doi.org/10.1016/j.patcog.2009.03.007 schema:sameAs https://app.dimensions.ai/details/publication/pub.1042747053
    107 rdf:type schema:CreativeWork
    108 https://doi.org/10.1016/j.visinf.2017.01.006 schema:sameAs https://app.dimensions.ai/details/publication/pub.1084112523
    109 rdf:type schema:CreativeWork
    110 https://doi.org/10.1109/5.949485 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061180379
    111 rdf:type schema:CreativeWork
    112 https://doi.org/10.1109/bdva.2015.7314299 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093194327
    113 rdf:type schema:CreativeWork
    114 https://doi.org/10.1109/fg.2018.00033 schema:sameAs https://app.dimensions.ai/details/publication/pub.1104459021
    115 rdf:type schema:CreativeWork
    116 https://doi.org/10.1109/icpr.2016.7900034 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094046300
    117 rdf:type schema:CreativeWork
    118 https://doi.org/10.1109/ths.2015.7446229 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094989914
    119 rdf:type schema:CreativeWork
    120 https://doi.org/10.1109/tsmca.2009.2025025 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061795515
    121 rdf:type schema:CreativeWork
    122 https://doi.org/10.1109/tvcg.2013.207 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061814071
    123 rdf:type schema:CreativeWork
    124 https://doi.org/10.1109/tvcg.2016.2598495 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061814767
    125 rdf:type schema:CreativeWork
    126 https://doi.org/10.1109/tvcg.2016.2598828 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061814811
    127 rdf:type schema:CreativeWork
    128 https://doi.org/10.1109/tvcg.2017.2744683 schema:sameAs https://app.dimensions.ai/details/publication/pub.1091437749
    129 rdf:type schema:CreativeWork
    130 https://doi.org/10.1109/tvcg.2017.2744718 schema:sameAs https://app.dimensions.ai/details/publication/pub.1091437753
    131 rdf:type schema:CreativeWork
    132 https://doi.org/10.1109/tvcg.2017.2744938 schema:sameAs https://app.dimensions.ai/details/publication/pub.1091437760
    133 rdf:type schema:CreativeWork
    134 https://doi.org/10.1109/vizsec.2015.7312772 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095736877
    135 rdf:type schema:CreativeWork
    136 https://doi.org/10.1111/cgf.13092 schema:sameAs https://app.dimensions.ai/details/publication/pub.1084211282
    137 rdf:type schema:CreativeWork
    138 https://doi.org/10.1117/12.2007316 schema:sameAs https://app.dimensions.ai/details/publication/pub.1029635807
    139 rdf:type schema:CreativeWork
    140 https://doi.org/10.1145/1068009.1068228 schema:sameAs https://app.dimensions.ai/details/publication/pub.1005145224
    141 rdf:type schema:CreativeWork
    142 https://doi.org/10.1145/1297231.1297257 schema:sameAs https://app.dimensions.ai/details/publication/pub.1030105023
    143 rdf:type schema:CreativeWork
    144 https://doi.org/10.1145/1458082.1458165 schema:sameAs https://app.dimensions.ai/details/publication/pub.1035499566
    145 rdf:type schema:CreativeWork
    146 https://doi.org/10.1145/1964897.1964906 schema:sameAs https://app.dimensions.ai/details/publication/pub.1038490350
    147 rdf:type schema:CreativeWork
    148 https://doi.org/10.1145/2858036.2858529 schema:sameAs https://app.dimensions.ai/details/publication/pub.1045055595
    149 rdf:type schema:CreativeWork
    150 https://doi.org/10.1145/3185524 schema:sameAs https://app.dimensions.ai/details/publication/pub.1104586961
    151 rdf:type schema:CreativeWork
    152 https://doi.org/10.1609/aimag.v35i4.2513 schema:sameAs https://app.dimensions.ai/details/publication/pub.1103067323
    153 rdf:type schema:CreativeWork
    154 https://doi.org/10.21105/joss.00861 schema:sameAs https://app.dimensions.ai/details/publication/pub.1106529416
    155 rdf:type schema:CreativeWork
    156 https://doi.org/10.2200/s00429ed1v01y201207aim018 schema:sameAs https://app.dimensions.ai/details/publication/pub.1069288311
    157 rdf:type schema:CreativeWork
    158 https://doi.org/10.23915/distill.00010 schema:sameAs https://app.dimensions.ai/details/publication/pub.1101456014
    159 rdf:type schema:CreativeWork
    160 https://www.grid.ac/institutes/grid.6518.a schema:alternateName University of the West of England
    161 schema:name Department of Computer Science and Creative Technologies, University of the West of England, Bristol, UK
    162 rdf:type schema:Organization
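
    The summary counts above (162 triples, 21 predicates, and so on) can be reproduced by loading the record into an RDF library. Below is a minimal sketch using rdflib against the N-Triples endpoint; rdflib is an assumption, not something the page prescribes.

    import requests
    from rdflib import Graph

    URL = "https://scigraph.springernature.com/pub.10.1186/s13673-019-0167-8"

    # Fetch the record as N-Triples and parse it into an in-memory graph.
    nt = requests.get(URL, headers={"Accept": "application/n-triples"}).text
    g = Graph()
    g.parse(data=nt, format="nt")

    print("triples:   ", len(g))                    # expect 162
    print("predicates:", len(set(g.predicates())))  # expect 21
    print("subjects:  ", len(set(g.subjects())))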
     



