Effect of Computer-Assisted Speech Training on Speech Recognition and Subjective Benefits for Hearing Aid Users With Severe to Profound Prelingual ...


Ontology type: schema:MedicalStudy     


Clinical Trial Info

YEARS

2007-2008

ABSTRACT

Computer-assisted speech training is a speech recognition training system developed for cochlear implant users. With minimal facilities and skills, cochlear implant users can conduct this training at home. The purpose of this study was to apply this system to adolescent and young adult hearing aid users with prelingual severe to profound hearing loss.

Detailed Description

Introduction

Sensorineural hearing loss (SNHL) is a disability affecting people worldwide, and its prevalence is expected to increase with prolonged life expectancy. SNHL has a significant negative impact on quality of life, especially in prelingually deafened children. Except for certain diseases such as sudden deafness or endolymphatic hydrops, which may be treated or alleviated by medication or surgery, most patients with SNHL have to wear hearing aids or undergo cochlear implantation to regain hearing. However, for many individuals these measures do not satisfactorily resolve communication problems, because hearing is only the first step in a series of events leading to communication. Between hearing and communication lie the important skills of listening and comprehension, and it has been suggested that, to achieve successful communication, patients receiving amplification should be offered some type of audiological rehabilitation. It has been reported that older subjects do not spontaneously acclimatize to wearing a hearing aid, or that the effects are small or nonexistent, which emphasizes the importance of rehabilitation after a hearing aid is fitted. Unfortunately, not everyone with SNHL in Taiwan receives this kind of rehabilitation.
The reasons for this may be: (a) methods of rehabilitation are not familiar to all clinicians or speech pathologists; (b) there is a shortage of clinicians or speech pathologists to provide such time-consuming rehabilitation; (c) hearing impaired patients may be unable to afford or are unwilling to dedicate time to rehabilitation; and (d) it is difficult to measure the improvements provided by rehabilitation. Recently, rehabilitative training procedures have been garnering interest due to technological advances enabling a hearing aid user to perform the procedures while at home using a personal computer. Burk et al trained young normal-hearing and older hearing-impaired listeners with digitally recorded training materials using a computer. The results showed that older hearing-impaired listeners were able to significantly improve their word-recognition abilities through training with one talker, and to some degree achieve the same level as young normal-hearing listeners. In addition, the improved performance was maintained across talkers and across time. The computer-aided speechreading training (CAST) system was developed to simulate a face-to-face training intervention and was designed to be one component of a comprehensive aural rehabilitation program for preretirement adults with acquired mild-to-moderate hearing loss. The aim of the training was to enhance speechreading skills to complement auditory speech perception. Throughout the training, the learner views a monitor that shows either a computer-generated screen or a videotaped recording of the teacher. CAST was designed to be used by a clinician to extend rather than to replace existing rehabilitative techniques. Computer-based training has also been applied to the rehabilitation of cochlear implant users. Before the development of computer-based training, some studies assessed the effects of limited training on the speech-recognition skills of poorer-performing cochlear implant users. 
Busby et al conducted ten 1-hour speech perception and production training sessions, and the results demonstrated minimal changes in the perceptual abilities of three cochlear implant users. Dawson and Clark conducted one 50-minute training session per week for 10 weeks, and four of five subjects showed some measure of improvement. The limited success of these attempts to improve the speech-recognition abilities of cochlear implant users was thought to be due to an inadequate amount of training. More intensive training of cochlear implant users was predicted to be effective, because in normal-hearing populations training has been shown to successfully improve speech segment discrimination and identification, and recognition of spectrally shifted speech. Fu et al reported encouraging results in the rehabilitation of cochlear implant users using a computer-assisted speech training system which they also called CAST, although this was different from the CAST system of Pichora-Fuller and Benguerel. The CAST system of Fu et al, developed at the House Ear Institute, contains a large database of training materials and can be installed on personal computers; with minimal facilities and skills, cochlear implant users can therefore conduct the training at home, and clinicians or speech pathologists can monitor each subject's test scores and training progress. The results demonstrated that after moderate amounts of training (1 hour per day, 5 days per week), all 10 postlingually deafened adult cochlear implant users in the study had significant improvements in vowel and consonant-recognition scores. Wu et al applied the CAST system to 10 Mandarin-speaking children (three hearing aid users and seven cochlear implant users). After training for half an hour a day, 5 days a week, for a period of 10 weeks, the subjects showed significant improvements in vowel, consonant and Chinese tone performance.
This improved performance was largely retained for 2 months after the training had been completed. Stacey and Summerfield also used computer-based auditory training to improve the perception of spectrally distorted speech. The results confirmed that the training helped to overcome the effects of spectral distortions in speech, and that the training materials were most effective when several talkers were included. Based on these previous studies, cochlear implant users can improve their speech recognition ability after training with a CAST system. If this system is also effective for hearing aid users, and especially prelingually deafened patients, the CAST system will have a substantially positive impact, as there are many more hearing aid users than cochlear implant users. The purpose of this study was to train prelingually deafened adolescents and young adults with CAST and to measure the benefits objectively and subjectively. The objective benefits were measured using published speech recognition tests [13], and the subjective benefits were measured using the client-oriented scale of improvement (COSI).

Materials and Methods

Subjects

Fifteen hearing aid users with prelingual severe to profound hearing loss participated in this study. Another six hearing aid users of similar age and average hearing level were included as the control group. The inclusion criteria for the study subjects and controls were: (1) age above 15 years; (2) having worn a hearing aid for at least 2 years after hearing loss was diagnosed; (3) basic ability to operate a computer; (4) Mandarin Chinese speaker; and (5) motivation to undertake the training program. The exclusion criteria were: (1) aided hearing average worse than 70 dBHL; (2) inability to operate a computer. Before training with CAST, all participants received unaided and aided sound field audiometry. Table 1 shows the basic information of the 21 participants.

Client-oriented scale of improvement (COSI)

We used a COSI questionnaire to evaluate subjective benefits.
Before training with the CAST system, both the training and control groups were asked to identify up to five specific situations in which they would like to cope better. At the end of the training, for each situation they were asked (A) how much better (or worse) they could now hear, and (B) how well they were now able to cope. For scaling purposes, the responses were assigned scores from 1 to 5, with 5 corresponding to "much better" and "almost always", 4 to "better" and "most of the time", 3 to "slightly better" and "half the time", 2 to "no difference" and "occasionally", and 1 to "worse" and "hardly ever", for questions A and B, respectively. Question A was defined as "improvement", and question B as "final ability". The total scores of the five situations were compared between the training and control groups.

Test materials and procedures

The speech recognition test materials, including monosyllabic word, disyllabic spondee word, vowel, consonant and Chinese tone recognition tests, were recorded onto a CD-ROM at Melody Medical Instruments Corp. by a male and a female speaker. The test materials were presented on a laptop computer connected to a GSI 61™ clinical audiometer (Grason-Stadler, USA) at an output level of 70 dBHL. Testing was performed in a double-walled, sound-treated room. Monosyllabic Chinese word recognition test materials included four blocks of 25 Chinese words. For each speech recognition test, 50 words were selected, resulting in a set of 50 tokens. After a monosyllabic Chinese word was presented, the participants were asked to write down the word. Four different sets of open-set tests were generated for each speech recognition test. Disyllabic Chinese spondee-word recognition test materials included two blocks of Chinese spondee-words, each block containing 36 Chinese spondee-words.
For each speech recognition test, one block was selected, resulting in a set of 36 tokens. After a Chinese spondee-word was presented, the participants were asked to write down the word. Four different sets of open-set tests were generated by changing the order of the materials for each speech recognition test. Vowel recognition test materials included 16 Chinese words. Vowel recognition was measured using a 4-alternative, forced-choice procedure in which Chinese characters were shown on the choice list. For each speech recognition test, the order of the words was changed; thus, four different sets of closed-set tests were generated. Consonant recognition test materials included 21 Chinese words. Consonant recognition was measured using a 4-alternative, forced-choice procedure in which a Chinese character was shown on the choice list. For each speech recognition test, the order of the words was changed, and thus four different sets of closed-set tests were generated. Chinese tone recognition test materials included 50 Mandarin Chinese words. The participants were asked to write down the Chinese tone (1: flat; 2: rising; 3: falling-rising; 4: falling) after the Chinese word was presented. For each speech recognition test, the order of the words was changed, and thus four different sets of open-set tests were generated. Before training, both groups underwent a series of speech recognition tests to establish baseline data. The training group then started training, whereas the control group did not receive any training. Every 4 weeks, the participants returned to the lab for another series of speech recognition tests using different test materials. Every participant had received a total of four speech recognition tests by the end of the study.

Training tools and procedures

CAST software developed at the House Ear Institute and distributed by Melody Medical Instrument Corp. was used as the training tool.
The training group was instructed to train at home following the program for at least 1 hour per day, 3 days a week, for 12 successive weeks. The control group did not receive any training and returned to the lab every 4 weeks for speech recognition tests. For each participant in the training group, a baseline speech recognition test was performed after the software had been installed on his or her personal computer. The results were analyzed by the software, which then automatically generated a targeted training program. The software contained a large amount of material, including pure tone, vowel recognition, consonant recognition, tone recognition, speaker recognition, environmental sounds, occasional words and occasional sentences. The subjects were asked to focus on pure tone, vowel recognition, consonant recognition and tone recognition training. The subjects started the training at a level generated by the computer software. There were usually five levels of difficulty in each training category, and each level consisted of several training sessions. For pure tone recognition training, the subjects were asked to choose the sound different from the others. Visual feedback was provided as to whether the response was correct or incorrect. After a training session had been completed, the score was calculated. If the score exceeded 80, the training proceeded to a higher level; if not, the session was repeated until the score exceeded 80. At higher training levels, the differences between the speech features in the response choices were reduced. For vowel recognition training, the subjects were asked to choose the vowel different from the others. After the subjects had progressed beyond the 3-alternative forced-choice discrimination task, they were trained to identify final vowels. Similar training procedures were used for consonant and tone recognition training.
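The level-progression rule described above (a session score above 80 advances the trainee; anything lower repeats the session) can be sketched as follows. The function and constant names are illustrative assumptions, not part of the actual CAST software.

```python
# Sketch of the score-based level progression described in the text.
# PASS_SCORE and MAX_LEVEL reflect the text's description: scores above
# 80 advance the level, and each training category "usually" had five
# levels of difficulty.

PASS_SCORE = 80
MAX_LEVEL = 5

def next_level(current_level: int, session_score: float) -> int:
    """Advance one level when the score exceeds PASS_SCORE; otherwise repeat."""
    if session_score > PASS_SCORE and current_level < MAX_LEVEL:
        return current_level + 1
    return current_level
```

For example, a score of 85 at level 2 moves the trainee to level 3, while a score of 70 repeats level 2.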
Each subject in the training group was asked to register on the Melody Medical Instrument Corp. website, and his or her username and password were provided to us. We were therefore able to monitor the total time spent training, as well as the training time and score for each exercise. If subjects did not reach the required amount of time and training sessions, we contacted their family and encouraged them to do more training.

Statistical methods

All statistical analyses were performed with SAS software (Version 9.1.3, SAS Institute Inc., Cary, NC, U.S.A.) and R software (Version 2.7). Two-sided p values of 0.05 or less were considered statistically significant. Continuous data were expressed as mean ± standard deviation (SD) unless otherwise specified. Percentages were calculated for categorical variables. Two-sample t tests or Wilcoxon rank-sum tests were used to compare the means or medians of continuous data between the two groups, whereas the chi-squared test or Fisher's exact test was used to compare categorical proportions between the two groups. In addition to univariate analyses, the data from the five speech recognition tests were analyzed by fitting multiple marginal linear regression models using generalized estimating equations (GEE). If the first-order autoregressive (i.e., AR(1)) correlation structure fit the repeated-measures data well, the model-based standard error estimates were used in the GEE analysis; otherwise, the empirical standard error estimates were reported. In addition, the COSI data were analyzed by fitting multiple linear regression models. Basic model-fitting techniques for variable selection, goodness-of-fit assessment, and regression diagnostics were used in our regression analyses to ensure the quality of the results. In stepwise variable selection, all of the univariately significant and non-significant covariates were considered, and the significance levels for entry and for stay were both set to 0.15 or larger.
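The study ran its analyses in SAS and R; purely as an illustration of the two-sample comparison of continuous data described above, the Welch t statistic can be computed as follows (a generic sketch, not the study's actual code).

```python
import math
from statistics import mean, variance  # variance() is the sample variance

def welch_t(x: list[float], y: list[float]) -> float:
    """Welch's two-sample t statistic: the difference in group means
    divided by its standard error, using each group's own variance."""
    se = math.sqrt(variance(x) / len(x) + variance(y) / len(y))
    return (mean(x) - mean(y)) / se
```

With groups [1, 2, 3, 4, 5] and [2, 3, 4, 5, 6] this gives t = -1.0; in practice the p value is then read from a t distribution with Welch-Satterthwaite degrees of freedom.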
The goodness-of-fit measure, the coefficient of determination (R²), was computed for all of the linear regression models; it is the square of the correlation between the observed response variable and the predicted value. R² takes values between 0 and 1, with larger values indicating a better fit of the multiple linear regression model to the observed continuous data. In addition, the variance inflation factor was examined to detect potential multicollinearity problems (defined as a value ≥ 10).
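The R² definition used above, the squared correlation between observed and model-predicted values, can be written out directly; this is a generic illustration, not code from the study.

```python
def r_squared(observed: list[float], predicted: list[float]) -> float:
    """Coefficient of determination computed as the square of the Pearson
    correlation between observed and predicted values. Values near 1
    indicate a good fit; values near 0, a poor one."""
    mo = sum(observed) / len(observed)
    mp = sum(predicted) / len(predicted)
    # r = cov / sqrt(var_o * var_p), so r^2 = cov^2 / (var_o * var_p);
    # the 1/n factors cancel, so plain sums of deviations suffice.
    cov = sum((o - mo) * (p - mp) for o, p in zip(observed, predicted))
    var_o = sum((o - mo) ** 2 for o in observed)
    var_p = sum((p - mp) ** 2 for p in predicted)
    return cov * cov / (var_o * var_p)
```

For observed values [1, 2, 3] and predictions [1, 2, 2], this gives R² = 0.75. A variance inflation factor of 10 or more for any covariate would separately flag multicollinearity, as noted above.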

URL

https://clinicaltrials.gov/show/NCT02092337

Related SciGraph Publications

JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service: JSON-LD Playground Google SDTT

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/2746", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/3468", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/3053", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "type": "DefinedTerm"
      }
    ], 
    "description": "Computer-assisted speech training is a speech recognition training system developed for cochlear implant users. With minimal facilities and skills, cochlear implant users can conduct this training at home. The purpose of this study was to apply this system to adolescent and young adult hearing aid users with prelingual severe to profound hearing loss.\n\nDetailed Description\nIntroduction Sensorineural hearing loss (SNHL) is a disability affecting people worldwide, and the prevalence is expected to increase due to prolonged life expectancy. SNHL has a significant negative impact on the quality of life, especially in prelingually deafened children. Except for certain diseases such as sudden deafness or endolymphatic hydrops, which may be treated or alleviated by medication or surgery, most patients with SNHL have to wear hearing aids or undergo cochlear implantation to regain hearing. However, for many individuals these measures do not satisfactorily resolve communication problems, because hearing is only the first step in a series of events leading to communication. Between hearing and communication lie the important skills of listening and comprehension, and to achieve successful communication it has been suggested that patients receiving amplification should be offered some type of audiological rehabilitation. It has been reported that older subjects do not spontaneously acclimatize to wearing a hearing aid, or that the effects are either small or nonexistent, which emphasizes the importance of rehabilitation after wearing a hearing aid. Unfortunately, not everyone with SNHL in Taiwan receives this kind of rehabilitation. 
The reasons for this may be: (a) methods of rehabilitation are not familiar to all clinicians or speech pathologists; (b) there is a shortage of clinicians or speech pathologists to provide such time-consuming rehabilitation; (c) hearing impaired patients may be unable to afford or are unwilling to dedicate time to rehabilitation; and (d) it is difficult to measure the improvements provided by rehabilitation. Recently, rehabilitative training procedures have been garnering interest due to technological advances enabling a hearing aid user to perform the procedures while at home using a personal computer. Burk et al trained young normal-hearing and older hearing-impaired listeners with digitally recorded training materials using a computer. The results showed that older hearing-impaired listeners were able to significantly improve their word-recognition abilities through training with one talker, and to some degree achieve the same level as young normal-hearing listeners. In addition, the improved performance was maintained across talkers and across time. The computer-aided speechreading training (CAST) system was developed to simulate a face-to-face training intervention and was designed to be one component of a comprehensive aural rehabilitation program for preretirement adults with acquired mild-to-moderate hearing loss. The aim of the training was to enhance speechreading skills to complement auditory speech perception. Throughout the training, the learner views a monitor that shows either a computer-generated screen or a videotaped recording of the teacher. CAST was designed to be used by a clinician to extend rather than to replace existing rehabilitative techniques. Computer-based training has also been applied to the rehabilitation of cochlear implant users. Before the development of computer-based training, some studies assessed the effects of limited training on the speech-recognition skills of poorer-performing cochlear implant users. 
Busby et al conducted ten 1-hour speech perception and production training sessions, and the results demonstrated minimal changes in perceptual abilities in three cochlear implant users. Dawson and Clark conducted one 50-minute training session per week for 10 weeks, and four of five subjects showed some measure of improvement. The limited success of these attempts to improve the speech-recognition abilities of cochlear implant users was thought to be due to an inadequate amount of training. More intensive training of cochlear implant users was predicted to be effective, because in normal hearing populations training has been shown to successfully improve speech segment discrimination and identification, and recognition on spectrally shifted speech. Fu et al reported encouraging results in the rehabilitation of cochlear implant users using a computer-assisted speech training system which they also called CAST, although this was different to the CAST system of Pichora-Fuller and Benguerel. The CAST system of Fu et al, developed at the House Ear Institute, contains a large database of training materials and can be installed on personal computers, and so with minimal facilities and skills, cochlear implant users can conduct the training at home, and clinicians or speech pathologists can monitor the subject's test score and training progress. The results demonstrated that after moderate amounts of training (1 hour per day, 5 days per week), all 10 postlingually deafened adult cochlear implant users in the study had significant improvements in vowel and consonant-recognition scores. Wu et al applied the CAST system to 10 Mandarin-speaking children (three hearing aid users and seven cochlear implant users). After training for half an hour a day, 5 days a week, for a period of 10 weeks, the subjects showed significant improvements in vowel, consonant and Chinese tone performance. 
This improved performance was largely retained for 2 months after the training had been completed. Stacey and Summerfield also used computer-based auditory training to improve the perception of noise. The results confirmed that the training helped to overcome the effects of spectral distortions in speech, and the training materials were most effective when several talkers were included. Based on these previous studies, cochlear implant users can improve their speech recognition ability after training with a CAST system. If this system is also effective for hearing aid users, and especially prelingually deafened patients, the CAST system will have a substantially positive impact, as there are many more hearing aid users than cochlear implant users. The purpose of this study was to train prelingually deafened adolescents and young adults with CAST and measure the benefits objectively and subjectively. The objective benefits were measured using published speech recognition tests [13], and the subjective benefits were measured using client-oriented scale of improvement (COSI). Materials and Methods Subjects Fifteen hearing aid users with prelingual severe to profound hearing loss participated in this study. Another six hearing aid users with a similar age and hearing average were included as the control group. The inclusion criteria for the study subjects and controls were: (1) age above 15 years; (2) wearing a hearing aid for at least for 2 years after hearing loss was diagnosed; (3) basic ability to operate a computer; (4) Mandarin Chinese speaker; and (5) motivation to undertake the training program. The exclusion criteria were: (1) aided hearing average worse than 70 dBHL; (2) unable to operate a computer. Before training with CAST, all participants received unaided and aided sound field audiometry. Table 1 shows the basic information of the 21 participants. Client-oriented scale of improvement (COSI) We use a COSI questionnaire to evaluate subjective benefits. 
Before training with the CAST system, both the training and control groups were asked to identify up five specific situations in which they would like to cope better. At the end of the training, for each situation they were asked (A) how much better (or worse) they could now hear, and (B) how well they were now able to cope. For scaling purposes, the responses were assigned scores from 1 to 5, with 5 corresponding to \"much better\" and \"almost always\", 4 corresponding to \"better\" and \"most of the time\", 3 corresponding to \"slightly better\" and \"half the time\", 2 corresponding to \"no difference\" and \"occasionally\", and 1 corresponding to \"worse\" and \"hardly ever\", for questions A and B, respectively. Question A was defined as an \"improvement\", and question B was defined as \"final ability\". The total scores of the five situations were compared between the training and control groups. Test materials and procedures The speech recognition test materials including monosyllabic words, disyllabic spondee words, vowels, consonants and Chinese tone recognition tests were recorded onto a CD-ROM at Melody Medical Instruments Corp. by a male and female speaker. The test materials were displayed on a laptop computer connected to a GSI 61TM clinical audiometer (Grason-Stadler, USA) at an output level of 70 dBHL. The testing procedure was performed in a double-walled, sound-treated room. Monosyllabic Chinese word recognition test materials included four blocks of 25 Chinese words. For each speech recognition test, 50 words were selected resulting in a set of 50 tokens. After a monosyllabic Chinese word was displayed, the participants were asked to write down the word. Four different sets of open-set tests were generated for each speech recognition test. Disyllabic Chinese spondee-word recognition test materials included two blocks of Chinese spondee-words, each block containing 36 Chinese spondee-words. 
For each speech recognition test, one block was selected resulting in a set of 36 tokens. After a Chinese spondee-word was displayed, the participants were asked to write down the word. Four different sets of open-set test were generated via changing the order of the materials for each speech recognition test. Vowel recognition test materials included 16 Chinese words. Vowel recognition was measured using a 4-alternative, forced-choice procedure in which Chinese characters were shown on the choice list. For each speech recognition test, the order of the words was changed. Thus, four different sets of closed-set tests were generated. Consonant recognition test materials included 21 Chinese words. Consonant recognition was measured using a 4-alternative, forced-choice procedure in which a Chinese character was shown on the choice list. For each speech recognition test, the order of the words was changed, and thus four different sets of closed-set tests were generated. Chinese tone recognition test materials included 50 Mandarin Chinese words. The participants were asked to write down the Chinese tone (tone: 1: flat; 2: rising; 3: falling-rising; 4: falling) after the Chinese word was displayed. For each speech recognition test, the order of the words was changed, and thus four different sets of open-set tests were generated. Before training, both groups underwent a series of speech recognition tests as baseline data. The training group then started training whereas the control group did not receive any training. Every 4 weeks, the participants returned to the lab for another series of speech recognition tests using different test materials. Every participant had received a total of four speech recognition tests by the end of the study. Training tools and procedures CAST software developed at the House Ear Institute and distributed by Melody Medical Instrument Corp. was used as the training tool. 
The training group was instructed to train at home following the program for at least 1 hour per day, 3 days a week, for 12 successive weeks. The control group did not receive any training and returned to the lab every 4 weeks for speech recognition tests. For each participant in the training group, a baseline speech recognition test was performed after the software had been installed into his or her personal computer. The results were analyzed by the software which then automatically generated a targeted training program. The software contained a large amount of information including pure tone, vowel recognition, consonant recognition, tone recognition, speaker recognition, environmental sounds, occasional words and occasional sentences. The subjects were asked to focus on pure tone, vowel recognition, consonant recognition and tone recognition training. The subjects started the training at a level generated by the computer software. There were usually five levels of difficulty in each training category, and each level consisted of several training sessions. For pure tone recognition training, the subjects were asked to choose the sound different to the others. Visual feedback was provided as to whether the response was correct or incorrect. After a training session had been completed, the score was calculated. If the score exceeded 80, the training proceeded to a higher level. If the score did not exceed 80, the training session was repeated until the score exceeded 80. At a higher level of training sessions, the differences between speech features in the response choices were reduced. For vowel recognition training, the subjects were asked to choose the vowel different to the others. After the subjects had progressed beyond the 3-alternative forced-choice discrimination task, they were trained to identify final vowels. Similar training procedures were used for consonant and tone recognition training. 
Each subject in the training group was asked to register on the Melody Medical Instrument Corp. website, and his or her username and password were provided to us. Therefore, we were able to monitor the total time spent training, and the training time and score for each exercise. If the subjects did not reach the required amount of time and training sessions, we contacted their family and encouraged them to do more training. Statistical methods All statistical analyses were performed with SAS software (Version 9.1.3, SAS Institute Inc., Cary, NC, U.S.A.) and R software (Version 2.7). Two-sided p values of 0.05 or less were considered to be statistically significant. Continuous data were expressed as mean \u00b1 standard deviation (SD) unless otherwise specified. Percentages were calculated for categorical variables. Two-sample t tests or Wilcoxon rank-sum tests were used to compare the means or medians of continuous data between two groups, whereas the chi-squared test or Fisher's exact test was used to analyze categorical proportions between two groups. In addition to univariate analyses, the data of the five speech recognition tests were analyzed by fitting multiple marginal linear regression models using generalized estimating equations. If the first-order autocorrelation (i.e., AR(1)) structure fit the repeated measures data well, the model-based standard error estimates were used in the generalized estimating equations analysis; otherwise, the empirical standard error estimates were reported. In addition, the data of COSI were analyzed by fitting multiple linear regression models. Basic model-fitting techniques for variable selection, goodness-of-fit assessment, and regression diagnostics were used in our regression analyses to ensure the quality of the results. In stepwise variable selection, all of the univariate significant and non-significant covariates were considered, and both the significance levels for entry and for stay were set to 0.15 or larger. 
The goodness-of-fit measure, the coefficient of determination (R2), was computed for all of the linear regression models; it is the square of the correlation between the observed response variable and the predicted value. R2 takes a value between 0 and 1, with a larger value indicating a better fit of the multiple linear regression model to the observed continuous data. In addition, the variance inflation factor (VIF) was examined to detect potential multicollinearity problems (defined as a VIF ≥ 10).", 
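The R2-as-squared-correlation and VIF definitions above can be made concrete with a small least-squares sketch. The NumPy code and toy data below are illustrative only and do not reproduce the study's SAS analysis.

```python
import numpy as np

def r_squared(y, yhat):
    """Square of the correlation between observed and predicted values."""
    return float(np.corrcoef(y, yhat)[0, 1] ** 2)

def vif(X, j):
    """VIF of column j: regress X[:, j] on the remaining columns (plus an
    intercept) and return 1 / (1 - R2). Values >= 10 flag multicollinearity."""
    y = X[:, j]
    A = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return 1.0 / (1.0 - r_squared(y, A @ beta))

x1 = np.arange(1.0, 9.0)                                  # 1..8
x2 = 2 * x1 + np.tile([0.01, -0.01], 4)                   # nearly collinear with x1
x3 = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])  # unrelated values
X = np.column_stack([x1, x2, x3])
```

Here vif(X, 0) is very large because x2 almost perfectly determines x1, while vif(X, 2) stays well below the 10 cutoff.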
    "endDate": "2008-03-01T00:00:00Z", 
    "id": "sg:clinicaltrial.NCT02092337", 
    "keywords": [
      "speech training", 
      "speech recognition", 
      "hearing aid", 
      "hearing loss", 
      "cochlear implant user", 
      "facility", 
      "skill", 
      "home", 
      "and young adult", 
      "user", 
      "profound hearing loss", 
      "introduction", 
      "sensorineural hearing loss", 
      "disability", 
      "people worldwide", 
      "prevalence", 
      "life expectancy", 
      "significant negative impact", 
      "life", 
      "child", 
      "certain disease", 
      "sudden hearing loss", 
      "endolymphatic hydrops", 
      "medication", 
      "General Surgery", 
      "patient", 
      "cochlear implantation", 
      "hearing", 
      "individual", 
      "Weight and Measure", 
      "communication problem", 
      "first step", 
      "communication", 
      "important skill", 
      "comprehension", 
      "successful communication", 
      "amplification", 
      "rehabilitation", 
      "old subject", 
      "nonexistent", 
      "everyone", 
      "method", 
      "clinician", 
      "speech pathologist", 
      "shortage", 
      "impaired patient", 
      "improvement", 
      "rehabilitative training", 
      "technological advance", 
      "Microcomputer", 
      "normal hearing", 
      "hearing-impaired listener", 
      "training material", 
      "computer", 
      "word recognition", 
      "talker", 
      "same level", 
      "normal-hearing listener", 
      "improved performance", 
      "training intervention", 
      "component", 
      "rehabilitation program", 
      "adult", 
      "speech perception", 
      "learner", 
      "recording", 
      "teacher", 
      "technique", 
      "computer-based training", 
      "development", 
      "limited training", 
      "training session", 
      "minimal change", 
      "perceptual ability", 
      "Dawson", 
      "Clark", 
      "limited success", 
      "intensive training", 
      "discrimination", 
      "recognition", 
      "speech", 
      "FUS", 
      "training system", 
      "fuller", 
      "House Ear Institute", 
      "large database", 
      "score", 
      "progress", 
      "moderate amount", 
      "significant improvement", 
      "vowel", 
      "Wu", 
      "Mandarin", 
      "half", 
      "period", 
      "consonant", 
      "tone", 
      "auditory training", 
      "perception", 
      "noise", 
      "distortion", 
      "previous study", 
      "positive impact", 
      "Adolescent", 
      "young adult", 
      "benefit", 
      "similar age", 
      "control group", 
      "inclusion criterion", 
      "study subject", 
      "control", 
      "age", 
      "basic", 
      "motivation", 
      "Education", 
      "exclusion criterion", 
      "sound field", 
      "basic information", 
      "questionnaire", 
      "specific situation", 
      "scaling", 
      "difference", 
      "total score", 
      "medical instrument", 
      "speaker", 
      "laptop computer", 
      "GSI", 
      "USA", 
      "output", 
      "testing procedure", 
      "Sound", 
      "token", 
      "different set", 
      "open set", 
      "character", 
      "list", 
      "closed set", 
      "falling", 
      "baseline data", 
      "training group", 
      "lab", 
      "training tool", 
      "software", 
      "targeted training", 
      "large amount", 
      "pure tone", 
      "speaker recognition", 
      "sentence", 
      "category", 
      "sensory feedback", 
      "high level", 
      "feature", 
      "discrimination task", 
      "training procedure", 
      "website", 
      "password", 
      "total time", 
      "training time", 
      "exercise", 
      "required amount", 
      "family", 
      "statistical method", 
      "statistical analysis", 
      "version", 
      "Institute", 
      "NC", 
      "U.S.A.", 
      "p-values", 
      "continuous data", 
      "mean", 
      "standard deviation", 
      "categorical variable", 
      "t-tests", 
      "rank", 
      "Fisher's", 
      "proportion", 
      "linear regression model", 
      "equation", 
      "first order", 
      "AR", 
      "repeated measure data", 
      "standard error", 
      "multiple linear regression", 
      "basic model", 
      "variable selection", 
      "goodness-of-fit", 
      "regression analysis", 
      "univariate", 
      "covariates", 
      "significance level", 
      "entry", 
      "coefficient", 
      "determination", 
      "R2", 
      "square", 
      "correlation", 
      "observed response", 
      "predicted value", 
      "large value", 
      "variance"
    ], 
    "name": "Effect of Computer-Assisted Speech Training on Speech Recognition and Subjective Benefits for Hearing Aid Users With Severe to Profound Prelingual Hearing Loss", 
    "sameAs": [
      "https://app.dimensions.ai/details/clinical_trial/NCT02092337"
    ], 
    "sdDataset": "clinical_trials", 
    "sdDatePublished": "2019-03-07T15:25", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "file:///pack/app/us_ct_data_00016.json", 
    "sponsor": [
      {
        "id": "https://www.grid.ac/institutes/grid.412094.a", 
        "type": "Organization"
      }
    ], 
    "startDate": "2007-06-01T00:00:00Z", 
    "subjectOf": [
      {
        "id": "https://doi.org/10.1159/000103211", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1001412741"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1044/jshr.3401.202", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1024927229"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/s10162-005-5061-6", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1026518491", 
          "https://doi.org/10.1007/s10162-005-5061-6"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1179/cim.2004.5.supplement-1.84", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1032376614"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.3109/00206090009073061", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1033315225"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.3109/03005369109076601", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1041090502"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1097/01.aud.0000215980.21158.a2", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1042923047"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1044/1092-4388(2003/011)", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1045660773"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1097/00003446-199712000-00007", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1060177552"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1121/1.1537708", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1062267933"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1121/1.2713668", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1062313682"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1121/1.420139", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1062371408"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://app.dimensions.ai/details/publication/pub.1082864286", 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://app.dimensions.ai/details/publication/pub.1083051545", 
        "type": "CreativeWork"
      }
    ], 
    "type": "MedicalStudy", 
    "url": "https://clinicaltrials.gov/show/NCT02092337"
  }
]
 


HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/clinicaltrial.NCT02092337'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/clinicaltrial.NCT02092337'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/clinicaltrial.NCT02092337'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/clinicaltrial.NCT02092337'


 
