Using frequency-following responses (FFRs) to evaluate the auditory function of frequency-modulation (FM) discrimination


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2017-10-23

AUTHORS

Zhen Fu, Xihong Wu, Jing Chen

ABSTRACT

Precise neural encoding of varying pitch is crucial for speech perception, especially in Mandarin. A valid evaluation of the listeners’ auditory function which accounts for the perception of pitch variation can facilitate the strategy of hearing compensation for hearing-impaired people. This auditory function has been evaluated by behavioral test in previous studies, but the objective measurement of auditory-evoked potentials, for example, is rarely studied. In this study, we investigated the scalp-recorded frequency-following responses (FFRs) evoked by frequency-modulated sweeps, and its correlation with behavioral performance on the just-noticeable differences (JNDs) of sweep slopes. The results showed that (1) the indices of FFRs varied significantly when the sweep slopes were manipulated; (2) the indices were all strongly negatively correlated with JNDs across listeners. The results suggested that the listener’s subjective JND could be predicted by the objective index of FFRs to tonal sweeps.

PAGES

10

References to SciGraph publications

  • 2013-06-13. Subcortical Neural Synchrony and Absolute Thresholds Predict Frequency Discrimination Independently in JOURNAL OF THE ASSOCIATION FOR RESEARCH IN OTOLARYNGOLOGY
  • Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1186/s40535-017-0040-7

    DOI

    http://dx.doi.org/10.1186/s40535-017-0040-7

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1092333088



    JSON-LD is the canonical representation for SciGraph data.


    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/17", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Psychology and Cognitive Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/1701", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Psychology", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Department of Machine Intelligence, Speech and Hearing Research Center, and Key Laboratory of Machine Perception (Ministry of Education), Peking University, 100871, Beijing, China", 
              "id": "http://www.grid.ac/institutes/grid.11135.37", 
              "name": [
                "Department of Machine Intelligence, Speech and Hearing Research Center, and Key Laboratory of Machine Perception (Ministry of Education), Peking University, 100871, Beijing, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Fu", 
            "givenName": "Zhen", 
            "id": "sg:person.013024042412.13", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013024042412.13"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Department of Machine Intelligence, Speech and Hearing Research Center, and Key Laboratory of Machine Perception (Ministry of Education), Peking University, 100871, Beijing, China", 
              "id": "http://www.grid.ac/institutes/grid.11135.37", 
              "name": [
                "Department of Machine Intelligence, Speech and Hearing Research Center, and Key Laboratory of Machine Perception (Ministry of Education), Peking University, 100871, Beijing, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Wu", 
            "givenName": "Xihong", 
            "id": "sg:person.01072767734.69", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01072767734.69"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Department of Machine Intelligence, Speech and Hearing Research Center, and Key Laboratory of Machine Perception (Ministry of Education), Peking University, 100871, Beijing, China", 
              "id": "http://www.grid.ac/institutes/grid.11135.37", 
              "name": [
                "Department of Machine Intelligence, Speech and Hearing Research Center, and Key Laboratory of Machine Perception (Ministry of Education), Peking University, 100871, Beijing, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Chen", 
            "givenName": "Jing", 
            "id": "sg:person.011736042670.70", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011736042670.70"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/s10162-013-0402-3", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1046763444", 
              "https://doi.org/10.1007/s10162-013-0402-3"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2017-10-23", 
        "datePublishedReg": "2017-10-23", 
        "description": "Precise neural encoding of varying pitch is crucial for speech perception, especially in Mandarin. A valid evaluation of the listeners\u2019 auditory function which accounts for the perception of pitch variation can facilitate the strategy of hearing compensation for hearing-impaired people. This auditory function has been evaluated by behavioral test in previous studies, but the objective measurement of auditory-evoked potentials, for example, is rarely studied. In this study, we investigated the scalp-recorded frequency-following responses (FFRs) evoked by frequency-modulated sweeps, and its correlation with behavioral performance on the just-noticeable differences (JNDs) of sweep slopes. The results showed that (1) the indices of FFRs varied significantly when the sweep slopes were manipulated; (2) the indices were all strongly negatively correlated with JNDs across listeners. The results suggested that the listener\u2019s subjective JND could be predicted by the objective index of FFRs to tonal sweeps.", 
        "genre": "article", 
        "id": "sg:pub.10.1186/s40535-017-0040-7", 
        "inLanguage": "en", 
        "isAccessibleForFree": true, 
        "isFundedItemOf": [
          {
            "id": "sg:grant.8298032", 
            "type": "MonetaryGrant"
          }, 
          {
            "id": "sg:grant.8125409", 
            "type": "MonetaryGrant"
          }
        ], 
        "isPartOf": [
          {
            "id": "sg:journal.1053269", 
            "issn": [
              "2196-0089"
            ], 
            "name": "Applied Informatics", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "1", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "4"
          }
        ], 
        "keywords": [
          "frequency-following response", 
          "scalp-recorded frequency-following response", 
          "hearing-impaired people", 
          "auditory function", 
          "speech perception", 
          "tonal sweeps", 
          "neural encoding", 
          "behavioral performance", 
          "frequency-modulated sweeps", 
          "auditory-evoked potentials", 
          "pitch variation", 
          "objective index", 
          "listeners", 
          "behavioral tests", 
          "perception", 
          "JND", 
          "valid evaluation", 
          "noticeable differences", 
          "encoding", 
          "Mandarin", 
          "objective measurements", 
          "discrimination", 
          "previous studies", 
          "people", 
          "pitch", 
          "index", 
          "response", 
          "performance", 
          "study", 
          "differences", 
          "results", 
          "test", 
          "strategies", 
          "function", 
          "evaluation", 
          "correlation", 
          "compensation", 
          "potential", 
          "example", 
          "sweep", 
          "measurements", 
          "slope", 
          "variation", 
          "Precise neural encoding", 
          "sweep slopes", 
          "indices of FFRs", 
          "listener\u2019s subjective JND", 
          "\u2019s subjective JND", 
          "frequency-modulation (FM) discrimination"
        ], 
        "name": "Using frequency-following responses (FFRs) to evaluate the auditory function of frequency-modulation (FM) discrimination", 
        "pagination": "10", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1092333088"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1186/s40535-017-0040-7"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1186/s40535-017-0040-7", 
          "https://app.dimensions.ai/details/publication/pub.1092333088"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2021-12-01T19:39", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20211201/entities/gbq_results/article/article_745.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1186/s40535-017-0040-7"
      }
    ]
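
    Since the record above is plain JSON, it can be inspected with standard tooling. A minimal sketch (the embedded string below is a trimmed copy of the record's title, date, and author fields, not the full document):

    ```python
    import json

    # Trimmed excerpt of the SciGraph JSON-LD record shown above.
    record_text = '''
    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
        "name": "Using frequency-following responses (FFRs) to evaluate the auditory function of frequency-modulation (FM) discrimination",
        "datePublished": "2017-10-23",
        "author": [
          {"familyName": "Fu", "givenName": "Zhen", "type": "Person"},
          {"familyName": "Wu", "givenName": "Xihong", "type": "Person"},
          {"familyName": "Chen", "givenName": "Jing", "type": "Person"}
        ]
      }
    ]
    '''

    # The record is a one-element JSON array; unwrap it, then read fields.
    record = json.loads(record_text)[0]
    authors = [f'{a["givenName"]} {a["familyName"]}' for a in record["author"]]
    print(record["datePublished"])  # 2017-10-23
    print(", ".join(authors))       # Zhen Fu, Xihong Wu, Jing Chen
    ```

    For richer processing (expansion, compaction against the `@context`) a dedicated JSON-LD processor would be needed, but simple field extraction like this requires nothing beyond the standard library.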
     


    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1186/s40535-017-0040-7'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1186/s40535-017-0040-7'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1186/s40535-017-0040-7'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1186/s40535-017-0040-7'
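
    The same content negotiation can be done from Python's standard library. A sketch that builds the request without performing the network fetch (the helper name `scigraph_request` is illustrative, not part of any SciGraph API):

    ```python
    import urllib.request

    def scigraph_request(pub_uri: str, mime: str) -> urllib.request.Request:
        """Build a content-negotiated request for a SciGraph record.

        mime is one of the formats listed above, e.g. 'application/ld+json',
        'application/n-triples', 'text/turtle', or 'application/rdf+xml'.
        """
        return urllib.request.Request(pub_uri, headers={"Accept": mime})

    req = scigraph_request(
        "https://scigraph.springernature.com/pub.10.1186/s40535-017-0040-7",
        "application/ld+json",
    )
    # To actually fetch the record: urllib.request.urlopen(req).read()
    print(req.get_header("Accept"))  # application/ld+json
    ```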


     

    This table displays all metadata directly associated with this object as RDF triples.

    128 TRIPLES      22 PREDICATES      75 URIs      66 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1186/s40535-017-0040-7 schema:about anzsrc-for:17
    2 anzsrc-for:1701
    3 schema:author N280d4b64fab3443bb7a6035f0e16b743
    4 schema:citation sg:pub.10.1007/s10162-013-0402-3
    5 schema:datePublished 2017-10-23
    6 schema:datePublishedReg 2017-10-23
    7 schema:description Precise neural encoding of varying pitch is crucial for speech perception, especially in Mandarin. A valid evaluation of the listeners’ auditory function which accounts for the perception of pitch variation can facilitate the strategy of hearing compensation for hearing-impaired people. This auditory function has been evaluated by behavioral test in previous studies, but the objective measurement of auditory-evoked potentials, for example, is rarely studied. In this study, we investigated the scalp-recorded frequency-following responses (FFRs) evoked by frequency-modulated sweeps, and its correlation with behavioral performance on the just-noticeable differences (JNDs) of sweep slopes. The results showed that (1) the indices of FFRs varied significantly when the sweep slopes were manipulated; (2) the indices were all strongly negatively correlated with JNDs across listeners. The results suggested that the listener’s subjective JND could be predicted by the objective index of FFRs to tonal sweeps.
    8 schema:genre article
    9 schema:inLanguage en
    10 schema:isAccessibleForFree true
    11 schema:isPartOf N64e37ef8d47e4da1bffef4cab064c6b1
    12 Ne745617596ff4c18bebb8f7bcadc3260
    13 sg:journal.1053269
    14 schema:keywords JND
    15 Mandarin
    16 Precise neural encoding
    17 auditory function
    18 auditory-evoked potentials
    19 behavioral performance
    20 behavioral tests
    21 compensation
    22 correlation
    23 differences
    24 discrimination
    25 encoding
    26 evaluation
    27 example
    28 frequency-following response
    29 frequency-modulated sweeps
    30 frequency-modulation (FM) discrimination
    31 function
    32 hearing-impaired people
    33 index
    34 indices of FFRs
    35 listeners
    36 listener’s subjective JND
    37 measurements
    38 neural encoding
    39 noticeable differences
    40 objective index
    41 objective measurements
    42 people
    43 perception
    44 performance
    45 pitch
    46 pitch variation
    47 potential
    48 previous studies
    49 response
    50 results
    51 scalp-recorded frequency-following response
    52 slope
    53 speech perception
    54 strategies
    55 study
    56 sweep
    57 sweep slopes
    58 test
    59 tonal sweeps
    60 valid evaluation
    61 variation
    62 ’s subjective JND
    63 schema:name Using frequency-following responses (FFRs) to evaluate the auditory function of frequency-modulation (FM) discrimination
    64 schema:pagination 10
    65 schema:productId N250d1f0584be45cea8216716513893fc
    66 N80390277511f42119acee2d721034039
    67 schema:sameAs https://app.dimensions.ai/details/publication/pub.1092333088
    68 https://doi.org/10.1186/s40535-017-0040-7
    69 schema:sdDatePublished 2021-12-01T19:39
    70 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    71 schema:sdPublisher Ne94953bb128847389a186b8fcd25be47
    72 schema:url https://doi.org/10.1186/s40535-017-0040-7
    73 sgo:license sg:explorer/license/
    74 sgo:sdDataset articles
    75 rdf:type schema:ScholarlyArticle
    76 N250d1f0584be45cea8216716513893fc schema:name dimensions_id
    77 schema:value pub.1092333088
    78 rdf:type schema:PropertyValue
    79 N280d4b64fab3443bb7a6035f0e16b743 rdf:first sg:person.013024042412.13
    80 rdf:rest Nf1666265f6e347bc8d0af90a22ab510b
    81 N64e37ef8d47e4da1bffef4cab064c6b1 schema:volumeNumber 4
    82 rdf:type schema:PublicationVolume
    83 N80390277511f42119acee2d721034039 schema:name doi
    84 schema:value 10.1186/s40535-017-0040-7
    85 rdf:type schema:PropertyValue
    86 Ne745617596ff4c18bebb8f7bcadc3260 schema:issueNumber 1
    87 rdf:type schema:PublicationIssue
    88 Ne94953bb128847389a186b8fcd25be47 schema:name Springer Nature - SN SciGraph project
    89 rdf:type schema:Organization
    90 Nf1666265f6e347bc8d0af90a22ab510b rdf:first sg:person.01072767734.69
    91 rdf:rest Nf89a724c1b1448ec97631d9e7cb268d0
    92 Nf89a724c1b1448ec97631d9e7cb268d0 rdf:first sg:person.011736042670.70
    93 rdf:rest rdf:nil
    94 anzsrc-for:17 schema:inDefinedTermSet anzsrc-for:
    95 schema:name Psychology and Cognitive Sciences
    96 rdf:type schema:DefinedTerm
    97 anzsrc-for:1701 schema:inDefinedTermSet anzsrc-for:
    98 schema:name Psychology
    99 rdf:type schema:DefinedTerm
    100 sg:grant.8125409 http://pending.schema.org/fundedItem sg:pub.10.1186/s40535-017-0040-7
    101 rdf:type schema:MonetaryGrant
    102 sg:grant.8298032 http://pending.schema.org/fundedItem sg:pub.10.1186/s40535-017-0040-7
    103 rdf:type schema:MonetaryGrant
    104 sg:journal.1053269 schema:issn 2196-0089
    105 schema:name Applied Informatics
    106 schema:publisher Springer Nature
    107 rdf:type schema:Periodical
    108 sg:person.01072767734.69 schema:affiliation grid-institutes:grid.11135.37
    109 schema:familyName Wu
    110 schema:givenName Xihong
    111 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01072767734.69
    112 rdf:type schema:Person
    113 sg:person.011736042670.70 schema:affiliation grid-institutes:grid.11135.37
    114 schema:familyName Chen
    115 schema:givenName Jing
    116 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011736042670.70
    117 rdf:type schema:Person
    118 sg:person.013024042412.13 schema:affiliation grid-institutes:grid.11135.37
    119 schema:familyName Fu
    120 schema:givenName Zhen
    121 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013024042412.13
    122 rdf:type schema:Person
    123 sg:pub.10.1007/s10162-013-0402-3 schema:sameAs https://app.dimensions.ai/details/publication/pub.1046763444
    124 https://doi.org/10.1007/s10162-013-0402-3
    125 rdf:type schema:CreativeWork
    126 grid-institutes:grid.11135.37 schema:alternateName Department of Machine Intelligence, Speech and Hearing Research Center, and Key Laboratory of Machine Perception (Ministry of Education), Peking University, 100871, Beijing, China
    127 schema:name Department of Machine Intelligence, Speech and Hearing Research Center, and Key Laboratory of Machine Perception (Ministry of Education), Peking University, 100871, Beijing, China
    128 rdf:type schema:Organization
     



