Ontology type: schema:Chapter
Open Access: True
Published: 2006
Authors: James Newsome, Brad Karp, Dawn Song
Abstract: Defending a server against Internet worms and defending a user’s email inbox against spam bear certain similarities. In both cases, a stream of samples arrives, and a classifier must automatically determine whether each sample falls into a malicious target class (e.g., worm network traffic, or spam email). A learner typically generates a classifier automatically by analyzing two labeled training pools: one of innocuous samples, and one of samples that fall in the malicious target class. Learning techniques have previously found success in settings where the content of the labeled samples used in training is either random, or even constructed by a helpful teacher, who aims to speed learning of an accurate classifier. In the case of learning classifiers for worms and spam, however, an adversary controls the content of the labeled samples to a great extent. In this paper, we describe practical attacks against learning, in which an adversary constructs labeled samples that, when used to train a learner, prevent or severely delay generation of an accurate classifier. We show that even a delusive adversary, whose samples are all correctly labeled, can obstruct learning. We simulate and implement highly effective instances of these attacks against the Polygraph [15] automatic polymorphic worm signature generation algorithms.
Pages: 81-105
In: Recent Advances in Intrusion Detection
ISBN: 978-3-540-39723-6, 978-3-540-39725-0
URL: http://scigraph.springernature.com/pub.10.1007/11856214_5
DOI: http://dx.doi.org/10.1007/11856214_5
Dimensions: https://app.dimensions.ai/details/publication/pub.1028677451
JSON-LD is the canonical representation for SciGraph data.
TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google's Structured Data Testing Tool (SDTT).
[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08",
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
        "name": "Information and Computing Sciences",
        "type": "DefinedTerm"
      },
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801",
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
        "name": "Artificial Intelligence and Image Processing",
        "type": "DefinedTerm"
      }
    ],
    "author": [
      {
        "affiliation": {
          "alternateName": "Carnegie Mellon University",
          "id": "http://www.grid.ac/institutes/grid.147455.6",
          "name": [
            "Carnegie Mellon University"
          ],
          "type": "Organization"
        },
        "familyName": "Newsome",
        "givenName": "James",
        "id": "sg:person.010737772415.24",
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010737772415.24"
        ],
        "type": "Person"
      },
      {
        "affiliation": {
          "alternateName": "University College London",
          "id": "http://www.grid.ac/institutes/grid.83440.3b",
          "name": [
            "University College London"
          ],
          "type": "Organization"
        },
        "familyName": "Karp",
        "givenName": "Brad",
        "id": "sg:person.011606701227.49",
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011606701227.49"
        ],
        "type": "Person"
      },
      {
        "affiliation": {
          "alternateName": "Carnegie Mellon University",
          "id": "http://www.grid.ac/institutes/grid.147455.6",
          "name": [
            "Carnegie Mellon University"
          ],
          "type": "Organization"
        },
        "familyName": "Song",
        "givenName": "Dawn",
        "id": "sg:person.01143152610.86",
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01143152610.86"
        ],
        "type": "Person"
      }
    ],
    "datePublished": "2006",
    "datePublishedReg": "2006-01-01",
    "description": "Defending a server against Internet worms and defending a user\u2019s email inbox against spam bear certain similarities. In both cases, a stream of samples arrives, and a classifier must automatically determine whether each sample falls into a malicious target class (e.g., worm network traffic, or spam email). A learner typically generates a classifier automatically by analyzing two labeled training pools: one of innocuous samples, and one of samples that fall in the malicious target class.Learning techniques have previously found success in settings where the content of the labeled samples used in training is either random, or even constructed by a helpful teacher, who aims to speed learning of an accurate classifier. In the case of learning classifiers for worms and spam, however, an adversary controls the content of the labeled samples to a great extent. In this paper, we describe practical attacks against learning, in which an adversary constructs labeled samples that, when used to train a learner, prevent or severely delay generation of an accurate classifier. We show that even a delusive adversary, whose samples are all correctly labeled, can obstruct learning. We simulate and implement highly effective instances of these attacks against the Polygraph [15] automatic polymorphic worm signature generation algorithms.",
    "editor": [
      {
        "familyName": "Zamboni",
        "givenName": "Diego",
        "type": "Person"
      },
      {
        "familyName": "Kruegel",
        "givenName": "Christopher",
        "type": "Person"
      }
    ],
    "genre": "chapter",
    "id": "sg:pub.10.1007/11856214_5",
    "inLanguage": "en",
    "isAccessibleForFree": true,
    "isPartOf": {
      "isbn": [
        "978-3-540-39723-6",
        "978-3-540-39725-0"
      ],
      "name": "Recent Advances in Intrusion Detection",
      "type": "Book"
    },
    "keywords": [
      "accurate classifier",
      "signature generation algorithm",
      "target class",
      "email inbox",
      "Internet worms",
      "practical attacks",
      "stream of samples",
      "effective instances",
      "generation algorithm",
      "training pool",
      "classifier",
      "adversary",
      "helpful teacher",
      "spam",
      "learning",
      "attacks",
      "server",
      "inbox",
      "algorithm",
      "learners",
      "instances",
      "streams",
      "class",
      "technique",
      "training",
      "similarity",
      "generation",
      "signatures",
      "success",
      "content",
      "certain similarities",
      "setting",
      "cases",
      "greater extent",
      "worms",
      "pool",
      "teachers",
      "extent",
      "samples",
      "paper"
    ],
    "name": "Paragraph: Thwarting Signature Learning by Training Maliciously",
    "pagination": "81-105",
    "productId": [
      {
        "name": "dimensions_id",
        "type": "PropertyValue",
        "value": [
          "pub.1028677451"
        ]
      },
      {
        "name": "doi",
        "type": "PropertyValue",
        "value": [
          "10.1007/11856214_5"
        ]
      }
    ],
    "publisher": {
      "name": "Springer Nature",
      "type": "Organisation"
    },
    "sameAs": [
      "https://doi.org/10.1007/11856214_5",
      "https://app.dimensions.ai/details/publication/pub.1028677451"
    ],
    "sdDataset": "chapters",
    "sdDatePublished": "2022-05-20T07:46",
    "sdLicense": "https://scigraph.springernature.com/explorer/license/",
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project",
      "type": "Organization"
    },
    "sdSource": "s3://com-springernature-scigraph/baseset/20220519/entities/gbq_results/chapter/chapter_338.jsonl",
    "type": "Chapter",
    "url": "https://doi.org/10.1007/11856214_5"
  }
]
Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.
JSON-LD is a popular format for linked data which is fully compatible with JSON.
curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/11856214_5'
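The same content negotiation works from a script. Below is a minimal sketch, assuming Python with the requests package (not part of this record), that sends the Accept header from the curl example above and reads a few of the JSON-LD fields shown in the listing:

import requests

# Fetch the SciGraph record as JSON-LD, using the same Accept header
# as the curl example above.
url = "https://scigraph.springernature.com/pub.10.1007/11856214_5"
resp = requests.get(url, headers={"Accept": "application/ld+json"})
resp.raise_for_status()

# The payload is a one-element JSON array, as in the listing above.
record = resp.json()[0]
print(record["name"])            # chapter title
print(record["datePublished"])   # "2006"
print([a["familyName"] for a in record["author"]])  # Newsome, Karp, Song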
N-Triples is a line-based linked data format ideal for batch operations.
curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/11856214_5'
Turtle is a human-readable linked data format.
curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/11856214_5'
RDF/XML is a standard XML format for linked data.
curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/11856214_5'
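To work with the triples directly rather than a particular serialization, the record can also be loaded into an RDF library. A minimal sketch, assuming Python with the requests and rdflib packages (neither is mentioned on this page), that parses the Turtle serialization and tallies counts of the kind summarized in the table below:

import requests
from rdflib import Graph

# Fetch the Turtle serialization, as in the curl example above.
url = "https://scigraph.springernature.com/pub.10.1007/11856214_5"
turtle = requests.get(url, headers={"Accept": "text/turtle"}).text

# Parse it into an in-memory graph and count triples and distinct predicates.
g = Graph()
g.parse(data=turtle, format="turtle")
print(len(g))                     # total triples in the record
print(len({p for _, p, _ in g}))  # distinct predicates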
This table displays all metadata directly associated with this object as RDF triples.
122 triples, 23 predicates, 66 URIs, 59 literals, 7 blank nodes
# | Subject | Predicate | Object |
---|---|---|---|
1 | sg:pub.10.1007/11856214_5 | schema:about | anzsrc-for:08 |
2 | ″ | ″ | anzsrc-for:0801 |
3 | ″ | schema:author | Nbc83a6b8b32f428fbd35f03e3ac6db34 |
4 | ″ | schema:datePublished | 2006 |
5 | ″ | schema:datePublishedReg | 2006-01-01 |
6 | ″ | schema:description | Defending a server against Internet worms and defending a user’s email inbox against spam bear certain similarities. In both cases, a stream of samples arrives, and a classifier must automatically determine whether each sample falls into a malicious target class (e.g., worm network traffic, or spam email). A learner typically generates a classifier automatically by analyzing two labeled training pools: one of innocuous samples, and one of samples that fall in the malicious target class.Learning techniques have previously found success in settings where the content of the labeled samples used in training is either random, or even constructed by a helpful teacher, who aims to speed learning of an accurate classifier. In the case of learning classifiers for worms and spam, however, an adversary controls the content of the labeled samples to a great extent. In this paper, we describe practical attacks against learning, in which an adversary constructs labeled samples that, when used to train a learner, prevent or severely delay generation of an accurate classifier. We show that even a delusive adversary, whose samples are all correctly labeled, can obstruct learning. We simulate and implement highly effective instances of these attacks against the Polygraph [15] automatic polymorphic worm signature generation algorithms. |
7 | ″ | schema:editor | N02ff19a145d04594b3bf19e5c1b0d461 |
8 | ″ | schema:genre | chapter |
9 | ″ | schema:inLanguage | en |
10 | ″ | schema:isAccessibleForFree | true |
11 | ″ | schema:isPartOf | N7c59a55308d64bad896de571a3ae3582 |
12 | ″ | schema:keywords | Internet worms |
13 | ″ | ″ | accurate classifier |
14 | ″ | ″ | adversary |
15 | ″ | ″ | algorithm |
16 | ″ | ″ | attacks |
17 | ″ | ″ | cases |
18 | ″ | ″ | certain similarities |
19 | ″ | ″ | class |
20 | ″ | ″ | classifier |
21 | ″ | ″ | content |
22 | ″ | ″ | effective instances |
23 | ″ | ″ | email inbox |
24 | ″ | ″ | extent |
25 | ″ | ″ | generation |
26 | ″ | ″ | generation algorithm |
27 | ″ | ″ | greater extent |
28 | ″ | ″ | helpful teacher |
29 | ″ | ″ | inbox |
30 | ″ | ″ | instances |
31 | ″ | ″ | learners |
32 | ″ | ″ | learning |
33 | ″ | ″ | paper |
34 | ″ | ″ | pool |
35 | ″ | ″ | practical attacks |
36 | ″ | ″ | samples |
37 | ″ | ″ | server |
38 | ″ | ″ | setting |
39 | ″ | ″ | signature generation algorithm |
40 | ″ | ″ | signatures |
41 | ″ | ″ | similarity |
42 | ″ | ″ | spam |
43 | ″ | ″ | stream of samples |
44 | ″ | ″ | streams |
45 | ″ | ″ | success |
46 | ″ | ″ | target class |
47 | ″ | ″ | teachers |
48 | ″ | ″ | technique |
49 | ″ | ″ | training |
50 | ″ | ″ | training pool |
51 | ″ | ″ | worms |
52 | ″ | schema:name | Paragraph: Thwarting Signature Learning by Training Maliciously |
53 | ″ | schema:pagination | 81-105 |
54 | ″ | schema:productId | N818d27075c4644bea861ad646e297cf7 |
55 | ″ | ″ | Ncf9789fec60a405aa56a1c7b49cc742b |
56 | ″ | schema:publisher | Nf9675524f47c46c59f8074be65421ab8 |
57 | ″ | schema:sameAs | https://app.dimensions.ai/details/publication/pub.1028677451 |
58 | ″ | ″ | https://doi.org/10.1007/11856214_5 |
59 | ″ | schema:sdDatePublished | 2022-05-20T07:46 |
60 | ″ | schema:sdLicense | https://scigraph.springernature.com/explorer/license/ |
61 | ″ | schema:sdPublisher | Nc8ec0a2f8b3a41ffba47d623a363755a |
62 | ″ | schema:url | https://doi.org/10.1007/11856214_5 |
63 | ″ | sgo:license | sg:explorer/license/ |
64 | ″ | sgo:sdDataset | chapters |
65 | ″ | rdf:type | schema:Chapter |
66 | N02ff19a145d04594b3bf19e5c1b0d461 | rdf:first | Nfb09b3b4d3634524b78aca3b1aecd926 |
67 | ″ | rdf:rest | N37886fcd76334385baf1b54266f8436c |
68 | N0d99536c462b462589c9b22ae7fb8dc2 | rdf:first | sg:person.01143152610.86 |
69 | ″ | rdf:rest | rdf:nil |
70 | N0e6bb1c557c54411809a0fa273dbd273 | schema:familyName | Kruegel |
71 | ″ | schema:givenName | Christopher |
72 | ″ | rdf:type | schema:Person |
73 | N1743df4177a149ab84c04e37742cfcba | rdf:first | sg:person.011606701227.49 |
74 | ″ | rdf:rest | N0d99536c462b462589c9b22ae7fb8dc2 |
75 | N37886fcd76334385baf1b54266f8436c | rdf:first | N0e6bb1c557c54411809a0fa273dbd273 |
76 | ″ | rdf:rest | rdf:nil |
77 | N7c59a55308d64bad896de571a3ae3582 | schema:isbn | 978-3-540-39723-6 |
78 | ″ | ″ | 978-3-540-39725-0 |
79 | ″ | schema:name | Recent Advances in Intrusion Detection |
80 | ″ | rdf:type | schema:Book |
81 | N818d27075c4644bea861ad646e297cf7 | schema:name | dimensions_id |
82 | ″ | schema:value | pub.1028677451 |
83 | ″ | rdf:type | schema:PropertyValue |
84 | Nbc83a6b8b32f428fbd35f03e3ac6db34 | rdf:first | sg:person.010737772415.24 |
85 | ″ | rdf:rest | N1743df4177a149ab84c04e37742cfcba |
86 | Nc8ec0a2f8b3a41ffba47d623a363755a | schema:name | Springer Nature - SN SciGraph project |
87 | ″ | rdf:type | schema:Organization |
88 | Ncf9789fec60a405aa56a1c7b49cc742b | schema:name | doi |
89 | ″ | schema:value | 10.1007/11856214_5 |
90 | ″ | rdf:type | schema:PropertyValue |
91 | Nf9675524f47c46c59f8074be65421ab8 | schema:name | Springer Nature |
92 | ″ | rdf:type | schema:Organisation |
93 | Nfb09b3b4d3634524b78aca3b1aecd926 | schema:familyName | Zamboni |
94 | ″ | schema:givenName | Diego |
95 | ″ | rdf:type | schema:Person |
96 | anzsrc-for:08 | schema:inDefinedTermSet | anzsrc-for: |
97 | ″ | schema:name | Information and Computing Sciences |
98 | ″ | rdf:type | schema:DefinedTerm |
99 | anzsrc-for:0801 | schema:inDefinedTermSet | anzsrc-for: |
100 | ″ | schema:name | Artificial Intelligence and Image Processing |
101 | ″ | rdf:type | schema:DefinedTerm |
102 | sg:person.010737772415.24 | schema:affiliation | grid-institutes:grid.147455.6 |
103 | ″ | schema:familyName | Newsome |
104 | ″ | schema:givenName | James |
105 | ″ | schema:sameAs | https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010737772415.24 |
106 | ″ | rdf:type | schema:Person |
107 | sg:person.01143152610.86 | schema:affiliation | grid-institutes:grid.147455.6 |
108 | ″ | schema:familyName | Song |
109 | ″ | schema:givenName | Dawn |
110 | ″ | schema:sameAs | https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01143152610.86 |
111 | ″ | rdf:type | schema:Person |
112 | sg:person.011606701227.49 | schema:affiliation | grid-institutes:grid.83440.3b |
113 | ″ | schema:familyName | Karp |
114 | ″ | schema:givenName | Brad |
115 | ″ | schema:sameAs | https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011606701227.49 |
116 | ″ | rdf:type | schema:Person |
117 | grid-institutes:grid.147455.6 | schema:alternateName | Carnegie Mellon University |
118 | ″ | schema:name | Carnegie Mellon University |
119 | ″ | rdf:type | schema:Organization |
120 | grid-institutes:grid.83440.3b | schema:alternateName | University College London |
121 | ″ | schema:name | University College London |
122 | ″ | rdf:type | schema:Organization |