Using frequency-following responses (FFRs) to evaluate the auditory function of frequency-modulation (FM) discrimination


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2017-12

AUTHORS

Zhen Fu, Xihong Wu, Jing Chen

ABSTRACT

Precise neural encoding of varying pitch is crucial for speech perception, especially in Mandarin. A valid evaluation of the listeners’ auditory function which accounts for the perception of pitch variation can facilitate the strategy of hearing compensation for hearing-impaired people. This auditory function has been evaluated by behavioral test in previous studies, but the objective measurement of auditory-evoked potentials, for example, is rarely studied. In this study, we investigated the scalp-recorded frequency-following responses (FFRs) evoked by frequency-modulated sweeps, and its correlation with behavioral performance on the just-noticeable differences (JNDs) of sweep slopes. The results showed that (1) the indices of FFRs varied significantly when the sweep slopes were manipulated; (2) the indices were all strongly negatively correlated with JNDs across listeners. The results suggested that the listener’s subjective JND could be predicted by the objective index of FFRs to tonal sweeps.

PAGES

10

References to SciGraph publications

  • 2013-10. Subcortical Neural Synchrony and Absolute Thresholds Predict Frequency Discrimination Independently, in JOURNAL OF THE ASSOCIATION FOR RESEARCH IN OTOLARYNGOLOGY
Journal

    TITLE

    Applied Informatics

    ISSUE

    1

    VOLUME

    4

    Author Affiliations

    Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1186/s40535-017-0040-7

    DOI

    http://dx.doi.org/10.1186/s40535-017-0040-7

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1092333088


    Indexing Status: Check whether this publication has been indexed by Scopus and Web Of Science using the SN Indexing Status Tool.
    Incoming Citations: Browse incoming citations for this publication using opencitations.net.

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/1701", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Psychology", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/17", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Psychology and Cognitive Sciences", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Peking University", 
              "id": "https://www.grid.ac/institutes/grid.11135.37", 
              "name": [
                "Department of Machine Intelligence, Speech and Hearing Research Center, and Key Laboratory of Machine Perception (Ministry of Education), Peking University, 100871, Beijing, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Fu", 
            "givenName": "Zhen", 
            "id": "sg:person.013024042412.13", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013024042412.13"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Peking University", 
              "id": "https://www.grid.ac/institutes/grid.11135.37", 
              "name": [
                "Department of Machine Intelligence, Speech and Hearing Research Center, and Key Laboratory of Machine Perception (Ministry of Education), Peking University, 100871, Beijing, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Wu", 
            "givenName": "Xihong", 
            "id": "sg:person.01072767734.69", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01072767734.69"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Peking University", 
              "id": "https://www.grid.ac/institutes/grid.11135.37", 
              "name": [
                "Department of Machine Intelligence, Speech and Hearing Research Center, and Key Laboratory of Machine Perception (Ministry of Education), Peking University, 100871, Beijing, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Chen", 
            "givenName": "Jing", 
            "id": "sg:person.0653227041.30", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0653227041.30"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "https://doi.org/10.1016/j.heares.2016.12.004", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1013294538"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.3766/jaaa.25.8.2", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1016600882"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1097/aud.0b013e3181cdb272", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1016808133"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1044/1092-4388(2011/10-0282)", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1020889889"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.heares.2008.08.004", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1032173250"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.heares.2011.02.005", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1038041250"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.3109/14992027.2010.515620", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1040095744"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s10162-013-0402-3", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1046763444", 
              "https://doi.org/10.1007/s10162-013-0402-3"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.3109/14992027.2013.834537", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1048326974"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1055/s-0029-1215439", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1057187510"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1121/1.1912375", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1062277965"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1121/1.3097469", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1062321131"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1121/1.411968", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1062363256"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1121/1.4820887", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1062401681"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://app.dimensions.ai/details/publication/pub.1074980431", 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.3389/fnhum.2017.00036", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1083693420"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2017-12", 
        "datePublishedReg": "2017-12-01", 
        "description": "Precise neural encoding of varying pitch is crucial for speech perception, especially in Mandarin. A valid evaluation of the listeners\u2019 auditory function which accounts for the perception of pitch variation can facilitate the strategy of hearing compensation for hearing-impaired people. This auditory function has been evaluated by behavioral test in previous studies, but the objective measurement of auditory-evoked potentials, for example, is rarely studied. In this study, we investigated the scalp-recorded frequency-following responses (FFRs) evoked by frequency-modulated sweeps, and its correlation with behavioral performance on the just-noticeable differences (JNDs) of sweep slopes. The results showed that (1) the indices of FFRs varied significantly when the sweep slopes were manipulated; (2) the indices were all strongly negatively correlated with JNDs across listeners. The results suggested that the listener\u2019s subjective JND could be predicted by the objective index of FFRs to tonal sweeps.", 
        "genre": "research_article", 
        "id": "sg:pub.10.1186/s40535-017-0040-7", 
        "inLanguage": [
          "en"
        ], 
        "isAccessibleForFree": true, 
        "isPartOf": [
          {
            "id": "sg:journal.1053269", 
            "issn": [
              "2196-0089"
            ], 
            "name": "Applied Informatics", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "1", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "4"
          }
        ], 
        "name": "Using frequency-following responses (FFRs) to evaluate the auditory function of frequency-modulation (FM) discrimination", 
        "pagination": "10", 
        "productId": [
          {
            "name": "readcube_id", 
            "type": "PropertyValue", 
            "value": [
              "811417e290497e4213012851d7eda42ee8d5101ee31822799e4a628c15391d9d"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1186/s40535-017-0040-7"
            ]
          }, 
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1092333088"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1186/s40535-017-0040-7", 
          "https://app.dimensions.ai/details/publication/pub.1092333088"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2019-04-10T14:23", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8660_00000601.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://link.springer.com/10.1186%2Fs40535-017-0040-7"
      }
    ]
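
    Because the record above is plain JSON-LD, it can be read with any JSON tooling. The snippet below is a minimal sketch, assuming the array above has been saved locally as record.json (a hypothetical filename); it pulls out the title, the author list, and the DOI using only the Python standard library.

    import json

    # Minimal sketch: assumes the JSON-LD array shown above was saved as
    # record.json (hypothetical filename).
    with open("record.json", encoding="utf-8") as f:
        records = json.load(f)      # the file holds a JSON array with one record

    record = records[0]
    print(record["name"])                      # article title
    for person in record["author"]:            # ordered list of schema:Person objects
        print(person["givenName"], person["familyName"])
    print(record["sameAs"][0])                 # DOI URL for the article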
     

    Download the RDF metadata as: json-ld, nt, turtle, or xml (see license info).

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1186/s40535-017-0040-7'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1186/s40535-017-0040-7'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1186/s40535-017-0040-7'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1186/s40535-017-0040-7'
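
    The same content negotiation works from any HTTP client. As a rough Python equivalent of the curl calls above (a sketch assuming the third-party requests package is available):

    import requests

    URL = "https://scigraph.springernature.com/pub.10.1186/s40535-017-0040-7"

    # Ask the endpoint for JSON-LD via the Accept header, as in the curl example.
    resp = requests.get(URL, headers={"Accept": "application/ld+json"})
    resp.raise_for_status()
    record = resp.json()[0]         # the payload is a JSON array with one record
    print(record["name"])

    # Other serializations are selected the same way, e.g. Turtle:
    turtle = requests.get(URL, headers={"Accept": "text/turtle"}).text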


     

    This table displays all metadata directly associated with this object as RDF triples; a short rdflib sketch for querying these triples follows the table.

    122 TRIPLES      21 PREDICATES      43 URIs      19 LITERALS      7 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1186/s40535-017-0040-7 schema:about anzsrc-for:17
    2 anzsrc-for:1701
    3 schema:author Ndee7d6a7b9ac4213be25fb7223c52fcd
    4 schema:citation sg:pub.10.1007/s10162-013-0402-3
    5 https://app.dimensions.ai/details/publication/pub.1074980431
    6 https://doi.org/10.1016/j.heares.2008.08.004
    7 https://doi.org/10.1016/j.heares.2011.02.005
    8 https://doi.org/10.1016/j.heares.2016.12.004
    9 https://doi.org/10.1044/1092-4388(2011/10-0282)
    10 https://doi.org/10.1055/s-0029-1215439
    11 https://doi.org/10.1097/aud.0b013e3181cdb272
    12 https://doi.org/10.1121/1.1912375
    13 https://doi.org/10.1121/1.3097469
    14 https://doi.org/10.1121/1.411968
    15 https://doi.org/10.1121/1.4820887
    16 https://doi.org/10.3109/14992027.2010.515620
    17 https://doi.org/10.3109/14992027.2013.834537
    18 https://doi.org/10.3389/fnhum.2017.00036
    19 https://doi.org/10.3766/jaaa.25.8.2
    20 schema:datePublished 2017-12
    21 schema:datePublishedReg 2017-12-01
    22 schema:description Precise neural encoding of varying pitch is crucial for speech perception, especially in Mandarin. A valid evaluation of the listeners’ auditory function which accounts for the perception of pitch variation can facilitate the strategy of hearing compensation for hearing-impaired people. This auditory function has been evaluated by behavioral test in previous studies, but the objective measurement of auditory-evoked potentials, for example, is rarely studied. In this study, we investigated the scalp-recorded frequency-following responses (FFRs) evoked by frequency-modulated sweeps, and its correlation with behavioral performance on the just-noticeable differences (JNDs) of sweep slopes. The results showed that (1) the indices of FFRs varied significantly when the sweep slopes were manipulated; (2) the indices were all strongly negatively correlated with JNDs across listeners. The results suggested that the listener’s subjective JND could be predicted by the objective index of FFRs to tonal sweeps.
    23 schema:genre research_article
    24 schema:inLanguage en
    25 schema:isAccessibleForFree true
    26 schema:isPartOf N62542a47dcf04726811404cf466e8200
    27 Nee27b7c5206a4056a3eb9f9ee4fe8b5e
    28 sg:journal.1053269
    29 schema:name Using frequency-following responses (FFRs) to evaluate the auditory function of frequency-modulation (FM) discrimination
    30 schema:pagination 10
    31 schema:productId N20bdc48c566a42398cf25865e0b7f730
    32 N5c316139ab7747528f5d04853dbde020
    33 Nfaf45a403f054bee8fb6889930787caa
    34 schema:sameAs https://app.dimensions.ai/details/publication/pub.1092333088
    35 https://doi.org/10.1186/s40535-017-0040-7
    36 schema:sdDatePublished 2019-04-10T14:23
    37 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    38 schema:sdPublisher N7c031d40f69849df93ceb7f2789df2fb
    39 schema:url https://link.springer.com/10.1186%2Fs40535-017-0040-7
    40 sgo:license sg:explorer/license/
    41 sgo:sdDataset articles
    42 rdf:type schema:ScholarlyArticle
    43 N20bdc48c566a42398cf25865e0b7f730 schema:name dimensions_id
    44 schema:value pub.1092333088
    45 rdf:type schema:PropertyValue
    46 N4b796c08f6fb419a9cfefc201dfad7d2 rdf:first sg:person.0653227041.30
    47 rdf:rest rdf:nil
    48 N50e6dfc359bc4e779cf5d4e34f7e0435 rdf:first sg:person.01072767734.69
    49 rdf:rest N4b796c08f6fb419a9cfefc201dfad7d2
    50 N5c316139ab7747528f5d04853dbde020 schema:name doi
    51 schema:value 10.1186/s40535-017-0040-7
    52 rdf:type schema:PropertyValue
    53 N62542a47dcf04726811404cf466e8200 schema:volumeNumber 4
    54 rdf:type schema:PublicationVolume
    55 N7c031d40f69849df93ceb7f2789df2fb schema:name Springer Nature - SN SciGraph project
    56 rdf:type schema:Organization
    57 Ndee7d6a7b9ac4213be25fb7223c52fcd rdf:first sg:person.013024042412.13
    58 rdf:rest N50e6dfc359bc4e779cf5d4e34f7e0435
    59 Nee27b7c5206a4056a3eb9f9ee4fe8b5e schema:issueNumber 1
    60 rdf:type schema:PublicationIssue
    61 Nfaf45a403f054bee8fb6889930787caa schema:name readcube_id
    62 schema:value 811417e290497e4213012851d7eda42ee8d5101ee31822799e4a628c15391d9d
    63 rdf:type schema:PropertyValue
    64 anzsrc-for:17 schema:inDefinedTermSet anzsrc-for:
    65 schema:name Psychology and Cognitive Sciences
    66 rdf:type schema:DefinedTerm
    67 anzsrc-for:1701 schema:inDefinedTermSet anzsrc-for:
    68 schema:name Psychology
    69 rdf:type schema:DefinedTerm
    70 sg:journal.1053269 schema:issn 2196-0089
    71 schema:name Applied Informatics
    72 rdf:type schema:Periodical
    73 sg:person.01072767734.69 schema:affiliation https://www.grid.ac/institutes/grid.11135.37
    74 schema:familyName Wu
    75 schema:givenName Xihong
    76 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01072767734.69
    77 rdf:type schema:Person
    78 sg:person.013024042412.13 schema:affiliation https://www.grid.ac/institutes/grid.11135.37
    79 schema:familyName Fu
    80 schema:givenName Zhen
    81 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013024042412.13
    82 rdf:type schema:Person
    83 sg:person.0653227041.30 schema:affiliation https://www.grid.ac/institutes/grid.11135.37
    84 schema:familyName Chen
    85 schema:givenName Jing
    86 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0653227041.30
    87 rdf:type schema:Person
    88 sg:pub.10.1007/s10162-013-0402-3 schema:sameAs https://app.dimensions.ai/details/publication/pub.1046763444
    89 https://doi.org/10.1007/s10162-013-0402-3
    90 rdf:type schema:CreativeWork
    91 https://app.dimensions.ai/details/publication/pub.1074980431 rdf:type schema:CreativeWork
    92 https://doi.org/10.1016/j.heares.2008.08.004 schema:sameAs https://app.dimensions.ai/details/publication/pub.1032173250
    93 rdf:type schema:CreativeWork
    94 https://doi.org/10.1016/j.heares.2011.02.005 schema:sameAs https://app.dimensions.ai/details/publication/pub.1038041250
    95 rdf:type schema:CreativeWork
    96 https://doi.org/10.1016/j.heares.2016.12.004 schema:sameAs https://app.dimensions.ai/details/publication/pub.1013294538
    97 rdf:type schema:CreativeWork
    98 https://doi.org/10.1044/1092-4388(2011/10-0282) schema:sameAs https://app.dimensions.ai/details/publication/pub.1020889889
    99 rdf:type schema:CreativeWork
    100 https://doi.org/10.1055/s-0029-1215439 schema:sameAs https://app.dimensions.ai/details/publication/pub.1057187510
    101 rdf:type schema:CreativeWork
    102 https://doi.org/10.1097/aud.0b013e3181cdb272 schema:sameAs https://app.dimensions.ai/details/publication/pub.1016808133
    103 rdf:type schema:CreativeWork
    104 https://doi.org/10.1121/1.1912375 schema:sameAs https://app.dimensions.ai/details/publication/pub.1062277965
    105 rdf:type schema:CreativeWork
    106 https://doi.org/10.1121/1.3097469 schema:sameAs https://app.dimensions.ai/details/publication/pub.1062321131
    107 rdf:type schema:CreativeWork
    108 https://doi.org/10.1121/1.411968 schema:sameAs https://app.dimensions.ai/details/publication/pub.1062363256
    109 rdf:type schema:CreativeWork
    110 https://doi.org/10.1121/1.4820887 schema:sameAs https://app.dimensions.ai/details/publication/pub.1062401681
    111 rdf:type schema:CreativeWork
    112 https://doi.org/10.3109/14992027.2010.515620 schema:sameAs https://app.dimensions.ai/details/publication/pub.1040095744
    113 rdf:type schema:CreativeWork
    114 https://doi.org/10.3109/14992027.2013.834537 schema:sameAs https://app.dimensions.ai/details/publication/pub.1048326974
    115 rdf:type schema:CreativeWork
    116 https://doi.org/10.3389/fnhum.2017.00036 schema:sameAs https://app.dimensions.ai/details/publication/pub.1083693420
    117 rdf:type schema:CreativeWork
    118 https://doi.org/10.3766/jaaa.25.8.2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1016600882
    119 rdf:type schema:CreativeWork
    120 https://www.grid.ac/institutes/grid.11135.37 schema:alternateName Peking University
    121 schema:name Department of Machine Intelligence, Speech and Hearing Research Center, and Key Laboratory of Machine Perception (Ministry of Education), Peking University, 100871, Beijing, China
    122 rdf:type schema:Organization
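
    As a sketch of how the triples above could be queried programmatically (assuming the rdflib and requests packages are installed; the schema.org prefix used below is an assumption about the vocabulary behind the schema: terms in the table):

    import requests
    from rdflib import Graph

    URL = "https://scigraph.springernature.com/pub.10.1186/s40535-017-0040-7"

    # Fetch the Turtle serialization and load it into an in-memory graph.
    turtle = requests.get(URL, headers={"Accept": "text/turtle"}).text
    g = Graph()
    g.parse(data=turtle, format="turtle")
    print(len(g))                   # number of triples (122 in the table above)

    # List all cited works with a simple SPARQL query.
    query = """
        PREFIX schema: <http://schema.org/>
        SELECT ?cited WHERE { ?article schema:citation ?cited . }
    """
    for row in g.query(query):
        print(row.cited)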
     



