TWC: Small: Automatic Techniques for Evaluating and Hardening Machine Learning Classifiers in the Presence of Adversaries


Ontology type: schema:MonetaryGrant     


Grant Info

YEARS

2016-2019

FUNDING AMOUNT

494,884 USD

ABSTRACT

New security exploits emerge far faster than manual analysts can analyze them, driving growing interest in automated machine learning tools for computer security. Classifiers based on machine learning algorithms have shown promising results for many security tasks including malware classification and network intrusion detection, but classic machine learning algorithms are not designed to operate in the presence of adversaries. Intelligent and adaptive adversaries may actively manipulate the information they present in attempts to evade a trained classifier, leading to a competition between the designers of learning systems and attackers who wish to evade them. This project is developing automated techniques for predicting how well classifiers will resist the evasions of adversaries, along with general methods to automatically harden machine-learning classifiers against adversarial evasion attacks. At the junction between machine learning and computer security, this project involves two main tasks: (1) developing a framework that can automatically assess the robustness of a classifier by using evolutionary techniques to simulate an adversary's efforts to evade that classifier; and (2) improving the robustness of classifiers by developing generic machine learning architectures that employ randomized models and co-evolution to automatically harden machine-learning classifiers against adversaries. Our system aims to allow a classifier designer to understand how the classification performance of a model degrades under evasion attacks, enabling better-informed and more secure design choices. The framework is general and scalable, and takes advantage of the latest advances in machine learning and computer security.

URL

http://www.nsf.gov/awardsearch/showAward?AWD_ID=1619098&HistoricalAwards=false

Related SciGraph Publications

  • 2017. Connecting Program Synthesis and Reachability: Automatic Program Repair Using Test-Input Generation in TOOLS AND ALGORITHMS FOR THE CONSTRUCTION AND ANALYSIS OF SYSTEMS
JSON-LD is the canonical representation for SciGraph data.


    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/2208", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "type": "DefinedTerm"
          }
        ], 
        "amount": {
          "currency": "USD", 
          "type": "MonetaryAmount", 
          "value": "494884"
        }, 
        "description": "New security exploits emerge far faster than manual analysts can analyze them, driving growing interest in automated machine learning tools for computer security. Classifiers based on machine learning algorithms have shown promising results for many security tasks including malware classification and network intrusion detection, but classic machine learning algorithms are not designed to operate in the presence of adversaries. Intelligent and adaptive adversaries may actively manipulate the information they present in attempts to evade a trained classifier, leading to a competition between the designers of learning systems and attackers who wish to evade them. This project is developing automated techniques for predicting how well classifiers will resist the evasions of adversaries, along with general methods to automatically harden machine-learning classifiers against adversarial evasion attacks. At the junction between machine learning and computer security, this project involves two main tasks: (1) developing a framework that can automatically assess the robustness of a classifier by using evolutionary techniques to simulate an adversary's efforts to evade that classifier; and (2) improving the robustness of classifiers by developing generic machine learning architectures that employ randomized models and co-evolution to automatically harden machine-learning classifiers against adversaries. Our system aims to allow a classifier designer to understand how the classification performance of a model degrades under evasion attacks, enabling better-informed and more secure design choices. The framework is general and scalable, and takes advantage of the latest advances in machine learning and computer security.", 
        "endDate": "2019-08-31T00:00:00Z", 
        "funder": {
          "id": "https://www.grid.ac/institutes/grid.457785.c", 
          "type": "Organization"
        }, 
        "id": "sg:grant.5540973", 
        "identifier": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "5540973"
            ]
          }, 
          {
            "name": "nsf_id", 
            "type": "PropertyValue", 
            "value": [
              "1619098"
            ]
          }
        ], 
        "inLanguage": [
          "en"
        ], 
        "keywords": [
          "latest advances", 
          "framework", 
          "robustness", 
          "classifier designer", 
          "adversarial evasion attacks", 
          "machine learning", 
          "Small", 
          "attempt", 
          "evasion attacks", 
          "adaptive adversary", 
          "Automatic Techniques", 
          "system", 
          "main task", 
          "model", 
          "Intelligent", 
          "New security exploits", 
          "classic machine", 
          "scalable", 
          "technique", 
          "classifier", 
          "randomized model", 
          "junction", 
          "promising results", 
          "information", 
          "attacker", 
          "many security tasks", 
          "generic machine", 
          "computer security", 
          "evolutionary techniques", 
          "TWC", 
          "adversary", 
          "network intrusion detection", 
          "presence", 
          "project", 
          "Adversaries", 
          "general method", 
          "architecture", 
          "adversary\u2019s efforts", 
          "classification performance", 
          "secure design choices", 
          "manual analysts", 
          "algorithm", 
          "evasion", 
          "machine learning tools", 
          "malware classification", 
          "Evaluating", 
          "designers", 
          "Hardening Machine Learning Classifiers", 
          "competition", 
          "interest", 
          "advantages", 
          "machine"
        ], 
        "name": "TWC: Small: Automatic Techniques for Evaluating and Hardening Machine Learning Classifiers in the Presence of Adversaries", 
        "recipient": [
          {
            "id": "https://www.grid.ac/institutes/grid.27755.32", 
            "type": "Organization"
          }, 
          {
            "affiliation": {
              "id": "https://www.grid.ac/institutes/grid.27755.32", 
              "name": "University of Virginia Main Campus", 
              "type": "Organization"
            }, 
            "familyName": "Qi", 
            "givenName": "Yanjun", 
            "id": "sg:person.012364473531.84", 
            "type": "Person"
          }, 
          {
            "member": "sg:person.012364473531.84", 
            "roleName": "PI", 
            "type": "Role"
          }, 
          {
            "affiliation": {
              "id": "https://www.grid.ac/institutes/grid.27755.32", 
              "name": "University of Virginia Main Campus", 
              "type": "Organization"
            }, 
            "familyName": "Weimer", 
            "givenName": "Westley", 
            "id": "sg:person.014010553007.40", 
            "type": "Person"
          }, 
          {
            "member": "sg:person.014010553007.40", 
            "roleName": "Co-PI", 
            "type": "Role"
          }, 
          {
            "affiliation": {
              "id": "https://www.grid.ac/institutes/grid.27755.32", 
              "name": "University of Virginia Main Campus", 
              "type": "Organization"
            }, 
            "familyName": "Evans", 
            "givenName": "David", 
            "id": "sg:person.0674231734.30", 
            "type": "Person"
          }, 
          {
            "member": "sg:person.0674231734.30", 
            "roleName": "Co-PI", 
            "type": "Role"
          }
        ], 
        "sameAs": [
          "https://app.dimensions.ai/details/grant/grant.5540973"
        ], 
        "sdDataset": "grants", 
        "sdDatePublished": "2019-03-07T12:37", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com.uberresearch.data.processor/core_data/20181219_192338/projects/base/nsf_projects_8.xml.gz", 
        "startDate": "2016-09-01T00:00:00Z", 
        "type": "MonetaryGrant", 
        "url": "http://www.nsf.gov/awardsearch/showAward?AWD_ID=1619098&HistoricalAwards=false"
      }
    ]
     


    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular linked-data format that is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/grant.5540973'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/grant.5540973'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/grant.5540973'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/grant.5540973'
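    Once retrieved, the JSON-LD can be handled with any standard JSON tooling. The sketch below (Python, standard library only) parses an abridged local copy of the record shown above rather than fetching it live; to fetch, you would send the same `Accept: application/ld+json` header via `urllib.request`.

    ```python
    import json

    # Abridged local copy of the JSON-LD record above. A live request would use
    # content negotiation, e.g.:
    #   urllib.request.Request(url, headers={"Accept": "application/ld+json"})
    record = json.loads("""
    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
        "amount": {"currency": "USD", "type": "MonetaryAmount", "value": "494884"},
        "id": "sg:grant.5540973",
        "name": "TWC: Small: Automatic Techniques for Evaluating and Hardening Machine Learning Classifiers in the Presence of Adversaries",
        "startDate": "2016-09-01T00:00:00Z",
        "endDate": "2019-08-31T00:00:00Z",
        "type": "MonetaryGrant"
      }
    ]
    """)

    grant = record[0]  # the payload is a one-element JSON array
    print(grant["name"])
    # The amount is stored as a string; format it with a thousands separator:
    print(f'{int(grant["amount"]["value"]):,} {grant["amount"]["currency"]}')
    # Dates are ISO 8601 timestamps; slice off the date portion:
    print(grant["startDate"][:10], "to", grant["endDate"][:10])
    ```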


     

    This table displays all metadata directly associated with this object as RDF triples.

    114 TRIPLES      19 PREDICATES      78 URIs      68 LITERALS      7 BLANK NODES

    Subject Predicate Object
    1 sg:grant.5540973 schema:about anzsrc-for:2208
    2 schema:amount Nac2bce2145e5432f8f0defd1bc6784e3
    3 schema:description New security exploits emerge far faster than manual analysts can analyze them, driving growing interest in automated machine learning tools for computer security. Classifiers based on machine learning algorithms have shown promising results for many security tasks including malware classification and network intrusion detection, but classic machine learning algorithms are not designed to operate in the presence of adversaries. Intelligent and adaptive adversaries may actively manipulate the information they present in attempts to evade a trained classifier, leading to a competition between the designers of learning systems and attackers who wish to evade them. This project is developing automated techniques for predicting how well classifiers will resist the evasions of adversaries, along with general methods to automatically harden machine-learning classifiers against adversarial evasion attacks. At the junction between machine learning and computer security, this project involves two main tasks: (1) developing a framework that can automatically assess the robustness of a classifier by using evolutionary techniques to simulate an adversary's efforts to evade that classifier; and (2) improving the robustness of classifiers by developing generic machine learning architectures that employ randomized models and co-evolution to automatically harden machine-learning classifiers against adversaries. Our system aims to allow a classifier designer to understand how the classification performance of a model degrades under evasion attacks, enabling better-informed and more secure design choices. The framework is general and scalable, and takes advantage of the latest advances in machine learning and computer security.
    4 schema:endDate 2019-08-31T00:00:00Z
    5 schema:funder https://www.grid.ac/institutes/grid.457785.c
    6 schema:identifier N4be8c3943c6b4523baa60673ad26363e
    7 N5dfbadbf470c4f71883909569a94592b
    8 schema:inLanguage en
    9 schema:keywords Adversaries
    10 Automatic Techniques
    11 Evaluating
    12 Hardening Machine Learning Classifiers
    13 Intelligent
    14 New security exploits
    15 Small
    16 TWC
    17 adaptive adversary
    18 advantages
    19 adversarial evasion attacks
    20 adversary
    21 adversary’s efforts
    22 algorithm
    23 architecture
    24 attacker
    25 attempt
    26 classic machine
    27 classification performance
    28 classifier
    29 classifier designer
    30 competition
    31 computer security
    32 designers
    33 evasion
    34 evasion attacks
    35 evolutionary techniques
    36 framework
    37 general method
    38 generic machine
    39 information
    40 interest
    41 junction
    42 latest advances
    43 machine
    44 machine learning
    45 machine learning tools
    46 main task
    47 malware classification
    48 manual analysts
    49 many security tasks
    50 model
    51 network intrusion detection
    52 presence
    53 project
    54 promising results
    55 randomized model
    56 robustness
    57 scalable
    58 secure design choices
    59 system
    60 technique
    61 schema:name TWC: Small: Automatic Techniques for Evaluating and Hardening Machine Learning Classifiers in the Presence of Adversaries
    62 schema:recipient N51da96dfeda04f68956f3a36164ab0a7
    63 N6dd9cb37ad7341ddb262595ae98039e4
    64 Nafe7bf56db03470ab7870876becbb3ee
    65 sg:person.012364473531.84
    66 sg:person.014010553007.40
    67 sg:person.0674231734.30
    68 https://www.grid.ac/institutes/grid.27755.32
    69 schema:sameAs https://app.dimensions.ai/details/grant/grant.5540973
    70 schema:sdDatePublished 2019-03-07T12:37
    71 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    72 schema:sdPublisher Ne0aa847b41b44925961f0355df2d218c
    73 schema:startDate 2016-09-01T00:00:00Z
    74 schema:url http://www.nsf.gov/awardsearch/showAward?AWD_ID=1619098&HistoricalAwards=false
    75 sgo:license sg:explorer/license/
    76 sgo:sdDataset grants
    77 rdf:type schema:MonetaryGrant
    78 N4be8c3943c6b4523baa60673ad26363e schema:name nsf_id
    79 schema:value 1619098
    80 rdf:type schema:PropertyValue
    81 N51da96dfeda04f68956f3a36164ab0a7 schema:member sg:person.014010553007.40
    82 schema:roleName Co-PI
    83 rdf:type schema:Role
    84 N5dfbadbf470c4f71883909569a94592b schema:name dimensions_id
    85 schema:value 5540973
    86 rdf:type schema:PropertyValue
    87 N6dd9cb37ad7341ddb262595ae98039e4 schema:member sg:person.0674231734.30
    88 schema:roleName Co-PI
    89 rdf:type schema:Role
    90 Nac2bce2145e5432f8f0defd1bc6784e3 schema:currency USD
    91 schema:value 494884
    92 rdf:type schema:MonetaryAmount
    93 Nafe7bf56db03470ab7870876becbb3ee schema:member sg:person.012364473531.84
    94 schema:roleName PI
    95 rdf:type schema:Role
    96 Ne0aa847b41b44925961f0355df2d218c schema:name Springer Nature - SN SciGraph project
    97 rdf:type schema:Organization
    98 anzsrc-for:2208 schema:inDefinedTermSet anzsrc-for:
    99 rdf:type schema:DefinedTerm
    100 sg:person.012364473531.84 schema:affiliation https://www.grid.ac/institutes/grid.27755.32
    101 schema:familyName Qi
    102 schema:givenName Yanjun
    103 rdf:type schema:Person
    104 sg:person.014010553007.40 schema:affiliation https://www.grid.ac/institutes/grid.27755.32
    105 schema:familyName Weimer
    106 schema:givenName Westley
    107 rdf:type schema:Person
    108 sg:person.0674231734.30 schema:affiliation https://www.grid.ac/institutes/grid.27755.32
    109 schema:familyName Evans
    110 schema:givenName David
    111 rdf:type schema:Person
    112 https://www.grid.ac/institutes/grid.27755.32 schema:name University of Virginia Main Campus
    113 rdf:type schema:Organization
    114 https://www.grid.ac/institutes/grid.457785.c rdf:type schema:Organization
     



