Self-Reference, Complexity, and Learning


Ontology type: schema:MonetaryGrant     


Grant Info

YEARS

2002-2006

FUNDING AMOUNT

164063 USD

ABSTRACT

Many results in computational learning are witnessed by self-referential classes. For example, one can show that restricting learning machines to always output conjectures consistent with their data lessens learning power, as witnessed by such a class. Various kinds of algorithmic transformations of witnessing classes (which can eliminate the self-reference) preserve some learnability results and destroy others. It is proposed to investigate this phenomenon more thoroughly for greater insight into learning. Machine learning, which is concerned with practical/empirical techniques, seeks robust learners and, in some cases, provides consistent learners. The PI and collaborators recently showed that, if one considers a formal robustness requiring that all algorithmic transformations of learnable classes must be uniformly learnable as well, then all such resultantly difficult learning that is possible can be done by consistent machines. It is proposed to show that this result does not extend to the not-necessarily-uniform case (or that it does), with the hope of thereby gaining insight for machine learning. It is also proposed to extend prior work of the PI and others to provide a theory of learning to coordinate goal-oriented tasks.

U-shaped learning involves learning, unlearning, and re-learning. It occurs in many domains of human cognitive development, including language, understanding of temperature, understanding of weight conservation, the interaction between understanding of object tracking and object permanence, and face recognition. In the context of algorithmically learning grammars for (formal) languages from any stream of complete positive data about those languages, the PI and collaborators have shown that, for some classes L of learnable languages, any machine M which learns the class must exhibit U-shaped learning on some language L in the class. It is proposed to strengthen and extend this result and to characterize such classes insightfully, with an eye to informing cognitive scientists. Lastly, it is proposed to combine the use of type-2 feasible functionals and feasible counting down from notations for constructive ordinals to obtain general concepts of feasible iterative learning. The separate items proposed above are highly interconnected and mutually reinforcing toward obtaining important and unifying insights for complexity theory, machine learning, and cognitive science.

URL

http://www.nsf.gov/awardsearch/showAward?AWD_ID=0208616&HistoricalAwards=false

Related SciGraph Publications

  • 2016. Program Size Complexity of Correction Grammars in the Ershov Hierarchy in PURSUIT OF THE UNIVERSAL
  • 2009. Independence Results for n-Ary Recursion Theorems in FUNDAMENTALS OF COMPUTATION THEORY

    JSON-LD is the canonical representation for SciGraph data.


    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/2217", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "type": "DefinedTerm"
          }
        ], 
        "amount": {
          "currency": "USD", 
          "type": "MonetaryAmount", 
          "value": "164063"
        }, 
        "description": "Many results in computational learning are witnessed by self-referential classes. For example, one can show that restricting learning machines to output always conjectures consistent with their data lessens learning power, as witnessed by such a class. Various kinds of algorithmic transformations of witnessing classes (which can eliminate the self-reference) preserve some learnability results and destroy others. It is proposed to investigate this phenomenon more thoroughly for greater insight into learning. Machine learning, which is concerned with practical/empirical techniques, seeks robust learners, and, in some cases, provides consistent learners. The PI and collaborators recently showed that, if one considers a formal robustness requiring that all algorithmic transformations of learnable classes must be uniformly learnable as well, then all such resultantly difficult learning that's possible can be done by consistent machines. It is proposed to show this result does not extend to the not-necessarily-uniformly case (or that it does) with the hope of thereby gaining insight for machine learning. It is proposed to extend prior work of the PI and others to provide a theory of learning to coordinate goal-oriented tasks. U-shaped learning involves learning, unlearning, and re-learning. U-shaped learning occurs in many domains of human cognitive development (including language, understanding of temperature, understanding of weight conservation, the interaction between understanding of object tracking and object permanence, and face recognition). In the context of algorithmically learning grammars for (formal) languages from any stream of complete positive data about those languages, it has been shown by the PI and collaborators that, for some classes of learnable languages L, any machine M which learns L must exhibit, on some L in L, U-shaped learning. 
It is proposed to strengthen and extend this result and to characterize insightfully such classes L and with an eye to informing the cognitive scientist. Lastly, it is proposed to combine the use of type-2 feasible functionals and feasible counting down from notations for constructive ordinals to obtain general concepts of feasible iterative learning. In general, the separate items proposed above are highly interconnected and mutually reinforcing toward obtaining important and unifying insights for complexity theory, machine learning, and cognitive science.", 
        "endDate": "2006-08-31T00:00:00Z", 
        "funder": {
          "id": "https://www.grid.ac/institutes/grid.457785.c", 
          "type": "Organization"
        }, 
        "id": "sg:grant.3027368", 
        "identifier": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "3027368"
            ]
          }, 
          {
            "name": "nsf_id", 
            "type": "PropertyValue", 
            "value": [
              "0208616"
            ]
          }
        ], 
        "inLanguage": [
          "en"
        ], 
        "keywords": [
          "language", 
          "weight conservation", 
          "feasible counting", 
          "self-reference", 
          "data", 
          "machine learning", 
          "self-referential classes", 
          "consistent machines", 
          "such classes L", 
          "many results", 
          "object tracking", 
          "Pi", 
          "others", 
          "greater insight", 
          "learning", 
          "class", 
          "cases", 
          "feasible iterative learning", 
          "algorithmic transformations", 
          "output", 
          "formal robustness", 
          "complexity theory", 
          "difficult learning", 
          "cognitive science", 
          "unlearning", 
          "unifying insights", 
          "understanding", 
          "human cognitive development", 
          "constructive ordinals", 
          "cognitive scientists", 
          "learnable languages L", 
          "complexity", 
          "collaborators", 
          "many domains", 
          "hope", 
          "eyes", 
          "machine M", 
          "results", 
          "example", 
          "grammar", 
          "complete positive data", 
          "consistent learner", 
          "separate items", 
          "computational learning", 
          "temperature", 
          "insight", 
          "various kinds", 
          "context", 
          "task", 
          "notation", 
          "machine", 
          "object permanence", 
          "phenomenon", 
          "robust learners", 
          "interaction", 
          "recognition", 
          "general concept", 
          "goal", 
          "prior work", 
          "learnable classes", 
          "use", 
          "empirical techniques", 
          "streams", 
          "ability results", 
          "theory"
        ], 
        "name": "Self-Reference, Complexity, and Learning", 
        "recipient": [
          {
            "id": "https://www.grid.ac/institutes/grid.33489.35", 
            "type": "Organization"
          }, 
          {
            "affiliation": {
              "id": "https://www.grid.ac/institutes/grid.33489.35", 
              "name": "University of Delaware", 
              "type": "Organization"
            }, 
            "familyName": "Case", 
            "givenName": "John", 
            "id": "sg:person.014355411521.49", 
            "type": "Person"
          }, 
          {
            "member": "sg:person.014355411521.49", 
            "roleName": "PI", 
            "type": "Role"
          }
        ], 
        "sameAs": [
          "https://app.dimensions.ai/details/grant/grant.3027368"
        ], 
        "sdDataset": "grants", 
        "sdDatePublished": "2019-03-07T12:26", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com.uberresearch.data.processor/core_data/20181219_192338/projects/base/nsf_projects_0.xml.gz", 
        "startDate": "2002-09-01T00:00:00Z", 
        "type": "MonetaryGrant", 
        "url": "http://www.nsf.gov/awardsearch/showAward?AWD_ID=0208616&HistoricalAwards=false"
      }
    ]
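
Once retrieved (or copied from the block above), the grant fields can be pulled out with any JSON parser. The sketch below works on a trimmed copy of the record shown above; the field names match the record, but the trimming down to a few keys is ours for illustration.

```python
import json

# Trimmed copy of the JSON-LD record shown above (only a few fields kept).
record = """
[
  {
    "amount": {"currency": "USD", "type": "MonetaryAmount", "value": "164063"},
    "name": "Self-Reference, Complexity, and Learning",
    "startDate": "2002-09-01T00:00:00Z",
    "endDate": "2006-08-31T00:00:00Z",
    "id": "sg:grant.3027368"
  }
]
"""

grant = json.loads(record)[0]           # the record is a one-element JSON array
amount = int(grant["amount"]["value"])  # the monetary value is stored as a string
years = (grant["startDate"][:4], grant["endDate"][:4])

print(grant["name"], amount, years)
```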
     


    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/grant.3027368'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/grant.3027368'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/grant.3027368'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/grant.3027368'
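
The same content negotiation can be done from code. Below is a minimal Python sketch using only the standard library; the URL is the one from the curl examples, and the actual fetch is left commented out so the snippet does not assume network access.

```python
import json
import urllib.request

url = "https://scigraph.springernature.com/grant.3027368"

# Ask for JSON-LD explicitly; swap the Accept value for
# "application/n-triples", "text/turtle", or "application/rdf+xml"
# to get the other serializations listed above.
req = urllib.request.Request(url, headers={"Accept": "application/ld+json"})

# Uncomment to actually fetch (requires network access):
# with urllib.request.urlopen(req) as resp:
#     record = json.load(resp)
#     print(record[0]["name"])

print(req.get_header("Accept"))
```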


     

    This table displays all metadata directly associated with this object as RDF triples.

    109 TRIPLES      19 PREDICATES      87 URIs      79 LITERALS      5 BLANK NODES

    Subject Predicate Object
    1 sg:grant.3027368 schema:about anzsrc-for:2217
    2 schema:amount Nde6375a5909a4200a814388f31f34d92
    3 schema:description Many results in computational learning are witnessed by self-referential classes. For example, one can show that restricting learning machines to output always conjectures consistent with their data lessens learning power, as witnessed by such a class. Various kinds of algorithmic transformations of witnessing classes (which can eliminate the self-reference) preserve some learnability results and destroy others. It is proposed to investigate this phenomenon more thoroughly for greater insight into learning. Machine learning, which is concerned with practical/empirical techniques, seeks robust learners, and, in some cases, provides consistent learners. The PI and collaborators recently showed that, if one considers a formal robustness requiring that all algorithmic transformations of learnable classes must be uniformly learnable as well, then all such resultantly difficult learning that's possible can be done by consistent machines. It is proposed to show this result does not extend to the not-necessarily-uniformly case (or that it does) with the hope of thereby gaining insight for machine learning. It is proposed to extend prior work of the PI and others to provide a theory of learning to coordinate goal-oriented tasks. U-shaped learning involves learning, unlearning, and re-learning. U-shaped learning occurs in many domains of human cognitive development (including language, understanding of temperature, understanding of weight conservation, the interaction between understanding of object tracking and object permanence, and face recognition). In the context of algorithmically learning grammars for (formal) languages from any stream of complete positive data about those languages, it has been shown by the PI and collaborators that, for some classes of learnable languages L, any machine M which learns L must exhibit, on some L in L, U-shaped learning. 
It is proposed to strengthen and extend this result and to characterize insightfully such classes L and with an eye to informing the cognitive scientist. Lastly, it is proposed to combine the use of type-2 feasible functionals and feasible counting down from notations for constructive ordinals to obtain general concepts of feasible iterative learning. In general, the separate items proposed above are highly interconnected and mutually reinforcing toward obtaining important and unifying insights for complexity theory, machine learning, and cognitive science.
    4 schema:endDate 2006-08-31T00:00:00Z
    5 schema:funder https://www.grid.ac/institutes/grid.457785.c
    6 schema:identifier Ncf135c0aa7054927b15fb25cef36946e
    7 Neaa0ac28bdcd4a3c840c324bda274b36
    8 schema:inLanguage en
    9 schema:keywords Pi
    10 ability results
    11 algorithmic transformations
    12 cases
    13 class
    14 cognitive science
    15 cognitive scientists
    16 collaborators
    17 complete positive data
    18 complexity
    19 complexity theory
    20 computational learning
    21 consistent learner
    22 consistent machines
    23 constructive ordinals
    24 context
    25 data
    26 difficult learning
    27 empirical techniques
    28 example
    29 eyes
    30 feasible counting
    31 feasible iterative learning
    32 formal robustness
    33 general concept
    34 goal
    35 grammar
    36 greater insight
    37 hope
    38 human cognitive development
    39 insight
    40 interaction
    41 language
    42 learnable classes
    43 learnable languages L
    44 learning
    45 machine
    46 machine M
    47 machine learning
    48 many domains
    49 many results
    50 notation
    51 object permanence
    52 object tracking
    53 others
    54 output
    55 phenomenon
    56 prior work
    57 recognition
    58 results
    59 robust learners
    60 self-reference
    61 self-referential classes
    62 separate items
    63 streams
    64 such classes L
    65 task
    66 temperature
    67 theory
    68 understanding
    69 unifying insights
    70 unlearning
    71 use
    72 various kinds
    73 weight conservation
    74 schema:name Self-Reference, Complexity, and Learning
    75 schema:recipient N02151311df04418f9cc68b3ede2edcdf
    76 sg:person.014355411521.49
    77 https://www.grid.ac/institutes/grid.33489.35
    78 schema:sameAs https://app.dimensions.ai/details/grant/grant.3027368
    79 schema:sdDatePublished 2019-03-07T12:26
    80 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    81 schema:sdPublisher N9a23791c3c8f4bcb911fe6e9b8a31f00
    82 schema:startDate 2002-09-01T00:00:00Z
    83 schema:url http://www.nsf.gov/awardsearch/showAward?AWD_ID=0208616&HistoricalAwards=false
    84 sgo:license sg:explorer/license/
    85 sgo:sdDataset grants
    86 rdf:type schema:MonetaryGrant
    87 N02151311df04418f9cc68b3ede2edcdf schema:member sg:person.014355411521.49
    88 schema:roleName PI
    89 rdf:type schema:Role
    90 N9a23791c3c8f4bcb911fe6e9b8a31f00 schema:name Springer Nature - SN SciGraph project
    91 rdf:type schema:Organization
    92 Ncf135c0aa7054927b15fb25cef36946e schema:name dimensions_id
    93 schema:value 3027368
    94 rdf:type schema:PropertyValue
    95 Nde6375a5909a4200a814388f31f34d92 schema:currency USD
    96 schema:value 164063
    97 rdf:type schema:MonetaryAmount
    98 Neaa0ac28bdcd4a3c840c324bda274b36 schema:name nsf_id
    99 schema:value 0208616
    100 rdf:type schema:PropertyValue
    101 anzsrc-for:2217 schema:inDefinedTermSet anzsrc-for:
    102 rdf:type schema:DefinedTerm
    103 sg:person.014355411521.49 schema:affiliation https://www.grid.ac/institutes/grid.33489.35
    104 schema:familyName Case
    105 schema:givenName John
    106 rdf:type schema:Person
    107 https://www.grid.ac/institutes/grid.33489.35 schema:name University of Delaware
    108 rdf:type schema:Organization
    109 https://www.grid.ac/institutes/grid.457785.c rdf:type schema:Organization
     



