Visual analytics for collaborative human-machine confidence in human-centric active learning tasks


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

2019-12

AUTHORS

Phil Legg, Jim Smith, Alexander Downing

ABSTRACT

Active machine learning is a human-centric paradigm that leverages a small labelled dataset to build an initial weak classifier, that can then be improved over time through human-machine collaboration. As new unlabelled samples are observed, the machine can either provide a prediction, or query a human ‘oracle’ when the machine is not confident in its prediction. Of course, just as the machine may lack confidence, the same can also be true of a human ‘oracle’: humans are not all-knowing, untiring oracles. A human’s ability to provide an accurate and confident response will often vary between queries, according to the duration of the current interaction, their level of engagement with the system, and the difficulty of the labelling task. This poses an important question of how uncertainty can be expressed and accounted for in a human-machine collaboration. In short, how can we facilitate a mutually-transparent collaboration between two uncertain actors—a person and a machine—that leads to an improved outcome? In this work, we demonstrate the benefit of human-machine collaboration within the process of active learning, where limited data samples are available or where labelling costs are high. To achieve this, we developed a visual analytics tool for active learning that promotes transparency, inspection, understanding and trust, of the learning process through human-machine collaboration. Fundamental to the notion of confidence, both parties can report their level of confidence during active learning tasks using the tool, such that this can be used to inform learning. Human confidence of labels can be accounted for by the machine, the machine can query for samples based on confidence measures, and the machine can report confidence of current predictions to the human, to further the trust and transparency between the collaborative parties. In particular, we find that this can improve the robustness of the classifier when incorrect sample labels are provided, due to unconfidence or fatigue. Reported confidences can also better inform human-machine sample selection in collaborative sampling. Our experimentation compares the impact of different selection strategies for acquiring samples: machine-driven, human-driven, and collaborative selection. We demonstrate how a collaborative approach can improve trust in the model robustness, achieving high accuracy and low user correction, with only limited data sample selections.
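A minimal, generic Python sketch of the confidence exchange the abstract describes (uncertainty sampling plus a human-reported confidence used as a sample weight). This is an illustrative assumption, not the authors' tool: the threshold, the oracle function, and the synthetic data are invented for the example.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Tiny synthetic pool: a few labelled seed samples and many unlabelled ones.
X_seed = rng.normal(size=(10, 2)) + np.array([[2, 2]] * 5 + [[-2, -2]] * 5)
y_seed = np.array([0] * 5 + [1] * 5)
X_pool = rng.normal(size=(200, 2)) * 2

clf = LogisticRegression().fit(X_seed, y_seed)

# Machine confidence: the maximum class probability for each unlabelled sample.
machine_conf = clf.predict_proba(X_pool).max(axis=1)

# Query the human 'oracle' only where the machine is unconfident
# (QUERY_THRESHOLD is an assumed value, not taken from the paper).
QUERY_THRESHOLD = 0.75
query_idx = np.where(machine_conf < QUERY_THRESHOLD)[0]

def human_oracle(x):
    """Stand-in for the interactive labelling step: returns a label and a
    self-reported confidence in [0, 1] (humans are not always certain)."""
    return int(x.sum() < 0), rng.uniform(0.5, 1.0)

labels, human_conf = [], []
for i in query_idx:
    y_i, c_i = human_oracle(X_pool[i])
    labels.append(y_i)
    human_conf.append(c_i)

# Retrain, weighting each human-provided label by the reported confidence,
# so low-confidence (possibly incorrect) labels influence the model less.
X_new = np.vstack([X_seed, X_pool[query_idx]])
y_new = np.concatenate([y_seed, labels])
weights = np.concatenate([np.ones(len(y_seed)), human_conf])
clf = LogisticRegression().fit(X_new, y_new, sample_weight=weights)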

PAGES

5

References to SciGraph publications

  • 2018-03-19. VIAL: a unified process for visual interactive labeling in THE VISUAL COMPUTER

Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1186/s13673-019-0167-8

    DOI

    http://dx.doi.org/10.1186/s13673-019-0167-8

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1112119178


    Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
    Incoming Citations: browse incoming citations for this publication using opencitations.net.

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record in an external JSON-LD service such as the JSON-LD Playground or Google's Structured Data Testing Tool.

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "University of the West of England", 
              "id": "https://www.grid.ac/institutes/grid.6518.a", 
              "name": [
                "Department of Computer Science and Creative Technologies, University of the West of England, Bristol, UK"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Legg", 
            "givenName": "Phil", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "University of the West of England", 
              "id": "https://www.grid.ac/institutes/grid.6518.a", 
              "name": [
                "Department of Computer Science and Creative Technologies, University of the West of England, Bristol, UK"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Smith", 
            "givenName": "Jim", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "University of the West of England", 
              "id": "https://www.grid.ac/institutes/grid.6518.a", 
              "name": [
                "Department of Computer Science and Creative Technologies, University of the West of England, Bristol, UK"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Downing", 
            "givenName": "Alexander", 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "https://doi.org/10.1145/1068009.1068228", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1005145224"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1117/12.2007316", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1029635807"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1145/1297231.1297257", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1030105023"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1145/1458082.1458165", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1035499566"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1145/1964897.1964906", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1038490350"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.imavis.2006.04.023", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1039344381"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.artint.2007.09.009", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1039977562"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.patcog.2009.03.007", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1042747053"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1145/2858036.2858529", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1045055595"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/5.949485", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061180379"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/tsmca.2009.2025025", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061795515"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/tvcg.2013.207", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061814071"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/tvcg.2016.2598495", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061814767"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/tvcg.2016.2598828", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061814811"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.2200/s00429ed1v01y201207aim018", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1069288311"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.visinf.2017.01.006", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1084112523"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1111/cgf.13092", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1084211282"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/tvcg.2017.2744683", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1091437749"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/tvcg.2017.2744718", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1091437753"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/tvcg.2017.2744938", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1091437760"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/bdva.2015.7314299", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1093194327"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/icpr.2016.7900034", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094046300"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/ths.2015.7446229", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094989914"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/vizsec.2015.7312772", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095736877"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.23915/distill.00010", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1101456014"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s00371-018-1500-3", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1101624616", 
              "https://doi.org/10.1007/s00371-018-1500-3"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1609/aimag.v35i4.2513", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1103067323"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/fg.2018.00033", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1104459021"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1145/3185524", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1104586961"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.21105/joss.00861", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1106529416"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2019-12", 
        "datePublishedReg": "2019-12-01", 
        "description": "Active machine learning is a human-centric paradigm that leverages a small labelled dataset to build an initial weak classifier, that can then be improved over time through human-machine collaboration. As new unlabelled samples are observed, the machine can either provide a prediction, or query a human \u2018oracle\u2019 when the machine is not confident in its prediction. Of course, just as the machine may lack confidence, the same can also be true of a human \u2018oracle\u2019: humans are not all-knowing, untiring oracles. A human\u2019s ability to provide an accurate and confident response will often vary between queries, according to the duration of the current interaction, their level of engagement with the system, and the difficulty of the labelling task. This poses an important question of how uncertainty can be expressed and accounted for in a human-machine collaboration. In short, how can we facilitate a mutually-transparent collaboration between two uncertain actors\u2014a person and a machine\u2014that leads to an improved outcome? In this work, we demonstrate the benefit of human-machine collaboration within the process of active learning, where limited data samples are available or where labelling costs are high. To achieve this, we developed a visual analytics tool for active learning that promotes transparency, inspection, understanding and trust, of the learning process through human-machine collaboration. Fundamental to the notion of confidence, both parties can report their level of confidence during active learning tasks using the tool, such that this can be used to inform learning. Human confidence of labels can be accounted for by the machine, the machine can query for samples based on confidence measures, and the machine can report confidence of current predictions to the human, to further the trust and transparency between the collaborative parties. In particular, we find that this can improve the robustness of the classifier when incorrect sample labels are provided, due to unconfidence or fatigue. Reported confidences can also better inform human-machine sample selection in collaborative sampling. Our experimentation compares the impact of different selection strategies for acquiring samples: machine-driven, human-driven, and collaborative selection. We demonstrate how a collaborative approach can improve trust in the model robustness, achieving high accuracy and low user correction, with only limited data sample selections.", 
        "genre": "research_article", 
        "id": "sg:pub.10.1186/s13673-019-0167-8", 
        "inLanguage": [
          "en"
        ], 
        "isAccessibleForFree": false, 
        "isPartOf": [
          {
            "id": "sg:journal.1136381", 
            "issn": [
              "2192-1962", 
              "2192-1962"
            ], 
            "name": "Human-centric Computing and Information Sciences", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "1", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "9"
          }
        ], 
        "name": "Visual analytics for collaborative human-machine confidence in human-centric active learning tasks", 
        "pagination": "5", 
        "productId": [
          {
            "name": "readcube_id", 
            "type": "PropertyValue", 
            "value": [
              "8417c335e46addb4cb71ac1322c2021be29dd4fb24a22312d870f386cc5f6d75"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1186/s13673-019-0167-8"
            ]
          }, 
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1112119178"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1186/s13673-019-0167-8", 
          "https://app.dimensions.ai/details/publication/pub.1112119178"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2019-04-11T12:11", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000361_0000000361/records_53981_00000001.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://link.springer.com/10.1186%2Fs13673-019-0167-8"
      }
    ]
     

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1186/s13673-019-0167-8'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1186/s13673-019-0167-8'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1186/s13673-019-0167-8'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1186/s13673-019-0167-8'
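
    The same requests can be made from Python; a minimal sketch, assuming the requests package is installed and that the response has the list-of-one-object shape shown in the JSON-LD above:

    import requests

    URL = "https://scigraph.springernature.com/pub.10.1186/s13673-019-0167-8"

    # Content negotiation via the Accept header, exactly as in the curl calls.
    record = requests.get(URL, headers={"Accept": "application/ld+json"}).json()

    print(record[0]["name"])            # article title
    print(record[0]["datePublished"])   # 2019-12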


     

    This table displays all metadata directly associated with this object as RDF triples.

    162 TRIPLES      21 PREDICATES      57 URIs      19 LITERALS      7 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1186/s13673-019-0167-8 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author Nc4b6b6b19d374527ab7bd7909b5cb232
    4 schema:citation sg:pub.10.1007/s00371-018-1500-3
    5 https://doi.org/10.1016/j.artint.2007.09.009
    6 https://doi.org/10.1016/j.imavis.2006.04.023
    7 https://doi.org/10.1016/j.patcog.2009.03.007
    8 https://doi.org/10.1016/j.visinf.2017.01.006
    9 https://doi.org/10.1109/5.949485
    10 https://doi.org/10.1109/bdva.2015.7314299
    11 https://doi.org/10.1109/fg.2018.00033
    12 https://doi.org/10.1109/icpr.2016.7900034
    13 https://doi.org/10.1109/ths.2015.7446229
    14 https://doi.org/10.1109/tsmca.2009.2025025
    15 https://doi.org/10.1109/tvcg.2013.207
    16 https://doi.org/10.1109/tvcg.2016.2598495
    17 https://doi.org/10.1109/tvcg.2016.2598828
    18 https://doi.org/10.1109/tvcg.2017.2744683
    19 https://doi.org/10.1109/tvcg.2017.2744718
    20 https://doi.org/10.1109/tvcg.2017.2744938
    21 https://doi.org/10.1109/vizsec.2015.7312772
    22 https://doi.org/10.1111/cgf.13092
    23 https://doi.org/10.1117/12.2007316
    24 https://doi.org/10.1145/1068009.1068228
    25 https://doi.org/10.1145/1297231.1297257
    26 https://doi.org/10.1145/1458082.1458165
    27 https://doi.org/10.1145/1964897.1964906
    28 https://doi.org/10.1145/2858036.2858529
    29 https://doi.org/10.1145/3185524
    30 https://doi.org/10.1609/aimag.v35i4.2513
    31 https://doi.org/10.21105/joss.00861
    32 https://doi.org/10.2200/s00429ed1v01y201207aim018
    33 https://doi.org/10.23915/distill.00010
    34 schema:datePublished 2019-12
    35 schema:datePublishedReg 2019-12-01
    36 schema:description Active machine learning is a human-centric paradigm that leverages a small labelled dataset to build an initial weak classifier, that can then be improved over time through human-machine collaboration. As new unlabelled samples are observed, the machine can either provide a prediction, or query a human ‘oracle’ when the machine is not confident in its prediction. Of course, just as the machine may lack confidence, the same can also be true of a human ‘oracle’: humans are not all-knowing, untiring oracles. A human’s ability to provide an accurate and confident response will often vary between queries, according to the duration of the current interaction, their level of engagement with the system, and the difficulty of the labelling task. This poses an important question of how uncertainty can be expressed and accounted for in a human-machine collaboration. In short, how can we facilitate a mutually-transparent collaboration between two uncertain actors—a person and a machine—that leads to an improved outcome? In this work, we demonstrate the benefit of human-machine collaboration within the process of active learning, where limited data samples are available or where labelling costs are high. To achieve this, we developed a visual analytics tool for active learning that promotes transparency, inspection, understanding and trust, of the learning process through human-machine collaboration. Fundamental to the notion of confidence, both parties can report their level of confidence during active learning tasks using the tool, such that this can be used to inform learning. Human confidence of labels can be accounted for by the machine, the machine can query for samples based on confidence measures, and the machine can report confidence of current predictions to the human, to further the trust and transparency between the collaborative parties. In particular, we find that this can improve the robustness of the classifier when incorrect sample labels are provided, due to unconfidence or fatigue. Reported confidences can also better inform human-machine sample selection in collaborative sampling. Our experimentation compares the impact of different selection strategies for acquiring samples: machine-driven, human-driven, and collaborative selection. We demonstrate how a collaborative approach can improve trust in the model robustness, achieving high accuracy and low user correction, with only limited data sample selections.
    37 schema:genre research_article
    38 schema:inLanguage en
    39 schema:isAccessibleForFree false
    40 schema:isPartOf N594755df621f4aeea0cf87ef56f6f7ae
    41 Nf4ef35e7804b4a078a8f447e5bd90708
    42 sg:journal.1136381
    43 schema:name Visual analytics for collaborative human-machine confidence in human-centric active learning tasks
    44 schema:pagination 5
    45 schema:productId N5a29d670d92f4e2fb7c0f75b5aee2a3a
    46 Nc39da9bf11714592a64639087322e6d3
    47 Nef44986b2b474f70a396910c9c2e6d40
    48 schema:sameAs https://app.dimensions.ai/details/publication/pub.1112119178
    49 https://doi.org/10.1186/s13673-019-0167-8
    50 schema:sdDatePublished 2019-04-11T12:11
    51 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    52 schema:sdPublisher N2cb4342825e74b48b7ef22ea71778c33
    53 schema:url https://link.springer.com/10.1186%2Fs13673-019-0167-8
    54 sgo:license sg:explorer/license/
    55 sgo:sdDataset articles
    56 rdf:type schema:ScholarlyArticle
    57 N2cb4342825e74b48b7ef22ea71778c33 schema:name Springer Nature - SN SciGraph project
    58 rdf:type schema:Organization
    59 N35e9ae1ada0e455f8f3a31f25bb188a4 schema:affiliation https://www.grid.ac/institutes/grid.6518.a
    60 schema:familyName Downing
    61 schema:givenName Alexander
    62 rdf:type schema:Person
    63 N594755df621f4aeea0cf87ef56f6f7ae schema:issueNumber 1
    64 rdf:type schema:PublicationIssue
    65 N5a29d670d92f4e2fb7c0f75b5aee2a3a schema:name doi
    66 schema:value 10.1186/s13673-019-0167-8
    67 rdf:type schema:PropertyValue
    68 Na39246542f8f4d1a866170e5303b5189 rdf:first Nf201af586e334c33951137b1f1786ec4
    69 rdf:rest Nb6c3d01807e94943aea32919b88c06d8
    70 Nb6c3d01807e94943aea32919b88c06d8 rdf:first N35e9ae1ada0e455f8f3a31f25bb188a4
    71 rdf:rest rdf:nil
    72 Nc39da9bf11714592a64639087322e6d3 schema:name dimensions_id
    73 schema:value pub.1112119178
    74 rdf:type schema:PropertyValue
    75 Nc4b6b6b19d374527ab7bd7909b5cb232 rdf:first Ndee8ed7f3d144a3a9760d3e4cb540b4a
    76 rdf:rest Na39246542f8f4d1a866170e5303b5189
    77 Ndee8ed7f3d144a3a9760d3e4cb540b4a schema:affiliation https://www.grid.ac/institutes/grid.6518.a
    78 schema:familyName Legg
    79 schema:givenName Phil
    80 rdf:type schema:Person
    81 Nef44986b2b474f70a396910c9c2e6d40 schema:name readcube_id
    82 schema:value 8417c335e46addb4cb71ac1322c2021be29dd4fb24a22312d870f386cc5f6d75
    83 rdf:type schema:PropertyValue
    84 Nf201af586e334c33951137b1f1786ec4 schema:affiliation https://www.grid.ac/institutes/grid.6518.a
    85 schema:familyName Smith
    86 schema:givenName Jim
    87 rdf:type schema:Person
    88 Nf4ef35e7804b4a078a8f447e5bd90708 schema:volumeNumber 9
    89 rdf:type schema:PublicationVolume
    90 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    91 schema:name Information and Computing Sciences
    92 rdf:type schema:DefinedTerm
    93 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    94 schema:name Artificial Intelligence and Image Processing
    95 rdf:type schema:DefinedTerm
    96 sg:journal.1136381 schema:issn 2192-1962
    97 schema:name Human-centric Computing and Information Sciences
    98 rdf:type schema:Periodical
    99 sg:pub.10.1007/s00371-018-1500-3 schema:sameAs https://app.dimensions.ai/details/publication/pub.1101624616
    100 https://doi.org/10.1007/s00371-018-1500-3
    101 rdf:type schema:CreativeWork
    102 https://doi.org/10.1016/j.artint.2007.09.009 schema:sameAs https://app.dimensions.ai/details/publication/pub.1039977562
    103 rdf:type schema:CreativeWork
    104 https://doi.org/10.1016/j.imavis.2006.04.023 schema:sameAs https://app.dimensions.ai/details/publication/pub.1039344381
    105 rdf:type schema:CreativeWork
    106 https://doi.org/10.1016/j.patcog.2009.03.007 schema:sameAs https://app.dimensions.ai/details/publication/pub.1042747053
    107 rdf:type schema:CreativeWork
    108 https://doi.org/10.1016/j.visinf.2017.01.006 schema:sameAs https://app.dimensions.ai/details/publication/pub.1084112523
    109 rdf:type schema:CreativeWork
    110 https://doi.org/10.1109/5.949485 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061180379
    111 rdf:type schema:CreativeWork
    112 https://doi.org/10.1109/bdva.2015.7314299 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093194327
    113 rdf:type schema:CreativeWork
    114 https://doi.org/10.1109/fg.2018.00033 schema:sameAs https://app.dimensions.ai/details/publication/pub.1104459021
    115 rdf:type schema:CreativeWork
    116 https://doi.org/10.1109/icpr.2016.7900034 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094046300
    117 rdf:type schema:CreativeWork
    118 https://doi.org/10.1109/ths.2015.7446229 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094989914
    119 rdf:type schema:CreativeWork
    120 https://doi.org/10.1109/tsmca.2009.2025025 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061795515
    121 rdf:type schema:CreativeWork
    122 https://doi.org/10.1109/tvcg.2013.207 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061814071
    123 rdf:type schema:CreativeWork
    124 https://doi.org/10.1109/tvcg.2016.2598495 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061814767
    125 rdf:type schema:CreativeWork
    126 https://doi.org/10.1109/tvcg.2016.2598828 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061814811
    127 rdf:type schema:CreativeWork
    128 https://doi.org/10.1109/tvcg.2017.2744683 schema:sameAs https://app.dimensions.ai/details/publication/pub.1091437749
    129 rdf:type schema:CreativeWork
    130 https://doi.org/10.1109/tvcg.2017.2744718 schema:sameAs https://app.dimensions.ai/details/publication/pub.1091437753
    131 rdf:type schema:CreativeWork
    132 https://doi.org/10.1109/tvcg.2017.2744938 schema:sameAs https://app.dimensions.ai/details/publication/pub.1091437760
    133 rdf:type schema:CreativeWork
    134 https://doi.org/10.1109/vizsec.2015.7312772 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095736877
    135 rdf:type schema:CreativeWork
    136 https://doi.org/10.1111/cgf.13092 schema:sameAs https://app.dimensions.ai/details/publication/pub.1084211282
    137 rdf:type schema:CreativeWork
    138 https://doi.org/10.1117/12.2007316 schema:sameAs https://app.dimensions.ai/details/publication/pub.1029635807
    139 rdf:type schema:CreativeWork
    140 https://doi.org/10.1145/1068009.1068228 schema:sameAs https://app.dimensions.ai/details/publication/pub.1005145224
    141 rdf:type schema:CreativeWork
    142 https://doi.org/10.1145/1297231.1297257 schema:sameAs https://app.dimensions.ai/details/publication/pub.1030105023
    143 rdf:type schema:CreativeWork
    144 https://doi.org/10.1145/1458082.1458165 schema:sameAs https://app.dimensions.ai/details/publication/pub.1035499566
    145 rdf:type schema:CreativeWork
    146 https://doi.org/10.1145/1964897.1964906 schema:sameAs https://app.dimensions.ai/details/publication/pub.1038490350
    147 rdf:type schema:CreativeWork
    148 https://doi.org/10.1145/2858036.2858529 schema:sameAs https://app.dimensions.ai/details/publication/pub.1045055595
    149 rdf:type schema:CreativeWork
    150 https://doi.org/10.1145/3185524 schema:sameAs https://app.dimensions.ai/details/publication/pub.1104586961
    151 rdf:type schema:CreativeWork
    152 https://doi.org/10.1609/aimag.v35i4.2513 schema:sameAs https://app.dimensions.ai/details/publication/pub.1103067323
    153 rdf:type schema:CreativeWork
    154 https://doi.org/10.21105/joss.00861 schema:sameAs https://app.dimensions.ai/details/publication/pub.1106529416
    155 rdf:type schema:CreativeWork
    156 https://doi.org/10.2200/s00429ed1v01y201207aim018 schema:sameAs https://app.dimensions.ai/details/publication/pub.1069288311
    157 rdf:type schema:CreativeWork
    158 https://doi.org/10.23915/distill.00010 schema:sameAs https://app.dimensions.ai/details/publication/pub.1101456014
    159 rdf:type schema:CreativeWork
    160 https://www.grid.ac/institutes/grid.6518.a schema:alternateName University of the West of England
    161 schema:name Department of Computer Science and Creative Technologies, University of the West of England, Bristol, UK
    162 rdf:type schema:Organization
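
    The summary counts above can be reproduced programmatically; a small sketch, assuming the Python requests and rdflib packages are installed (identical triples are de-duplicated when the graph is parsed):

    import requests
    import rdflib

    URL = "https://scigraph.springernature.com/pub.10.1186/s13673-019-0167-8"

    # Fetch the N-Triples serialisation, as in the curl example above.
    nt = requests.get(URL, headers={"Accept": "application/n-triples"}).text

    g = rdflib.Graph()
    g.parse(data=nt, format="nt")

    print(len(g), "triples")
    print(len(set(g.predicates())), "distinct predicates")
    print(sum(isinstance(n, rdflib.BNode) for n in set(g.subjects()) | set(g.objects())), "blank nodes")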
     



