Semantic Congruency Between Music and Video in Game Contents


Ontology type: schema:Chapter     


Chapter Info

DATE

2019

AUTHORS

Natsuhiro Marumo, Yuji Tsutsui, Masashi Yamada

ABSTRACT

Emotional features have been illustrated, in the simplest way, by a two-dimensional model spanned by valence and arousal axes. In the present study, the correlations between semantic congruency and the emotional coincidence of music and videos on the valence and arousal factors were clarified in the context of game contents. Participants rated the degree of congruency between music and videos. Semantic congruency was very high when the emotions of the music and the video coincided on both factors. In cases where the emotional feature of a musical piece did not coincide with that of a video on the valence or arousal factor, the congruency decreased significantly. When the emotions coincided on neither factor, the congruency showed the lowest values. The results implied that both the valence and arousal factors of the emotional features were equally important for the semantic congruency between musical pieces and videos.

PAGES

366-372

Book

TITLE

Advances in Human Factors in Wearable Technologies and Game Design

ISBN

978-3-319-94618-4
978-3-319-94619-1

Author Affiliations

Kanazawa Institute of Technology (all authors)

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/978-3-319-94619-1_36

DOI

http://dx.doi.org/10.1007/978-3-319-94619-1_36

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1105083230



JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record with an external JSON-LD service such as the JSON-LD Playground or Google's Structured Data Testing Tool.

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/1701", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Psychology", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/17", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Psychology and Cognitive Sciences", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Kanazawa Institute of Technology", 
          "id": "https://www.grid.ac/institutes/grid.444537.5", 
          "name": [
            "Kanazawa Institute of Technology"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Marumo", 
        "givenName": "Natsuhiro", 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Kanazawa Institute of Technology", 
          "id": "https://www.grid.ac/institutes/grid.444537.5", 
          "name": [
            "Kanazawa Institute of Technology"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Tsutsui", 
        "givenName": "Yuji", 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Kanazawa Institute of Technology", 
          "id": "https://www.grid.ac/institutes/grid.444537.5", 
          "name": [
            "Kanazawa Institute of Technology"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Yamada", 
        "givenName": "Masashi", 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "https://doi.org/10.1037/h0077714", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1026751541"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1037/h0094102", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1045576719"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1080/07494469300640421", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1046985022"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2019", 
    "datePublishedReg": "2019-01-01", 
    "description": "Emotional features have been illustrated by a two-dimensional model, which was spanned by valence and arousal axes, in the simplest way. In the present study, the correlations between semantic congruency and the emotional coincidences on the valence and arousal factors between music and videos were clarified, in the context of game contents. Participants rated the degree of congruency between music and videos. The semantic congruency was very high when the emotions of the music and video were coincided in both factors. In the cases where emotional feature of a musical piece did not coincide with a video in the valence or arousal factor, the congruency significantly decreased. When the emotions coincided neither factor, the congruency showed the lowest values. The results implied that both the valence and arousal factors in the emotional features were equally important for the semantic congruency between musical pieces and videos.", 
    "editor": [
      {
        "familyName": "Ahram", 
        "givenName": "Tareq Z.", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/978-3-319-94619-1_36", 
    "inLanguage": [
      "en"
    ], 
    "isAccessibleForFree": false, 
    "isPartOf": {
      "isbn": [
        "978-3-319-94618-4", 
        "978-3-319-94619-1"
      ], 
      "name": "Advances in Human Factors in Wearable Technologies and Game Design", 
      "type": "Book"
    }, 
    "name": "Semantic Congruency Between Music and Video in Game Contents", 
    "pagination": "366-372", 
    "productId": [
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/978-3-319-94619-1_36"
        ]
      }, 
      {
        "name": "readcube_id", 
        "type": "PropertyValue", 
        "value": [
          "32d535e27a33f3a5ff882dac7d9eb32e5f6e263a0d79bc5902c5ba55e2c5666b"
        ]
      }, 
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1105083230"
        ]
      }
    ], 
    "publisher": {
      "location": "Cham", 
      "name": "Springer International Publishing", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/978-3-319-94619-1_36", 
      "https://app.dimensions.ai/details/publication/pub.1105083230"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2019-04-15T20:24", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8687_00000429.jsonl", 
    "type": "Chapter", 
    "url": "http://link.springer.com/10.1007/978-3-319-94619-1_36"
  }
]
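For readers who want to work with this record in code, the following minimal Python sketch reads the basic bibliographic fields out of the JSON-LD structure shown above. It assumes the record has been saved locally as record.json (the filename is illustrative); the field names are taken directly from the record above.

import json

# Load the JSON-LD record shown above (saved locally; the filename is illustrative).
with open("record.json") as f:
    records = json.load(f)

chapter = records[0]  # the JSON-LD document is a list containing a single record

title = chapter["name"]
doi = next(p["value"][0] for p in chapter["productId"] if p["name"] == "doi")
authors = [f'{a["givenName"]} {a["familyName"]}' for a in chapter["author"]]

print(title)               # Semantic Congruency Between Music and Video in Game Contents
print(doi)                 # 10.1007/978-3-319-94619-1_36
print(", ".join(authors))  # Natsuhiro Marumo, Yuji Tsutsui, Masashi Yamada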
 


HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/978-3-319-94619-1_36'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/978-3-319-94619-1_36'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/978-3-319-94619-1_36'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/978-3-319-94619-1_36'
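The same content negotiation can be done from Python. This is a minimal sketch, assuming the third-party requests library is installed; it mirrors the curl calls above by switching the Accept header to select a serialization.

import requests  # third-party: pip install requests (assumed available)

URL = "https://scigraph.springernature.com/pub.10.1007/978-3-319-94619-1_36"

# Pick the serialization via the Accept header, exactly as in the curl examples above.
for accept in ("application/ld+json", "application/n-triples",
               "text/turtle", "application/rdf+xml"):
    resp = requests.get(URL, headers={"Accept": accept}, timeout=30)
    resp.raise_for_status()
    print(accept, "->", len(resp.text), "bytes")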


 

This table displays all metadata directly associated with this object as RDF triples (a short rdflib sketch for querying them follows the table).

85 TRIPLES      23 PREDICATES      30 URIs      20 LITERALS      8 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/978-3-319-94619-1_36 schema:about anzsrc-for:17
2 anzsrc-for:1701
3 schema:author N9b1842e4cb4b4a46ba06b4067d620379
4 schema:citation https://doi.org/10.1037/h0077714
5 https://doi.org/10.1037/h0094102
6 https://doi.org/10.1080/07494469300640421
7 schema:datePublished 2019
8 schema:datePublishedReg 2019-01-01
9 schema:description Emotional features have been illustrated by a two-dimensional model, which was spanned by valence and arousal axes, in the simplest way. In the present study, the correlations between semantic congruency and the emotional coincidences on the valence and arousal factors between music and videos were clarified, in the context of game contents. Participants rated the degree of congruency between music and videos. The semantic congruency was very high when the emotions of the music and video were coincided in both factors. In the cases where emotional feature of a musical piece did not coincide with a video in the valence or arousal factor, the congruency significantly decreased. When the emotions coincided neither factor, the congruency showed the lowest values. The results implied that both the valence and arousal factors in the emotional features were equally important for the semantic congruency between musical pieces and videos.
10 schema:editor N79af750e7522401ba6d52027948de0df
11 schema:genre chapter
12 schema:inLanguage en
13 schema:isAccessibleForFree false
14 schema:isPartOf N2308b7049e70425fa6d716f3fa77f185
15 schema:name Semantic Congruency Between Music and Video in Game Contents
16 schema:pagination 366-372
17 schema:productId N3b0b7ef71dda4ef48cc7dfd85ecf56ec
18 Neb4a33431eef49ef81be01d9f1b01870
19 Nf84444bd86c34e7b81de3cc1d34dee80
20 schema:publisher N48e7037afb864ad288ea2059803aa1ed
21 schema:sameAs https://app.dimensions.ai/details/publication/pub.1105083230
22 https://doi.org/10.1007/978-3-319-94619-1_36
23 schema:sdDatePublished 2019-04-15T20:24
24 schema:sdLicense https://scigraph.springernature.com/explorer/license/
25 schema:sdPublisher Ncc5872e349a6428c9b11fb4b8f3867a3
26 schema:url http://link.springer.com/10.1007/978-3-319-94619-1_36
27 sgo:license sg:explorer/license/
28 sgo:sdDataset chapters
29 rdf:type schema:Chapter
30 N2308b7049e70425fa6d716f3fa77f185 schema:isbn 978-3-319-94618-4
31 978-3-319-94619-1
32 schema:name Advances in Human Factors in Wearable Technologies and Game Design
33 rdf:type schema:Book
34 N2bac9dc14bb6417c9b8e594c6a9324ef schema:affiliation https://www.grid.ac/institutes/grid.444537.5
35 schema:familyName Marumo
36 schema:givenName Natsuhiro
37 rdf:type schema:Person
38 N3b0b7ef71dda4ef48cc7dfd85ecf56ec schema:name readcube_id
39 schema:value 32d535e27a33f3a5ff882dac7d9eb32e5f6e263a0d79bc5902c5ba55e2c5666b
40 rdf:type schema:PropertyValue
41 N47f0022ef0294b4690c5cf9d600e97eb rdf:first N6d63b3e3663c46d393587708b60ec9e8
42 rdf:rest N670189b7d26741fa8d0cace9a6615c33
43 N48e7037afb864ad288ea2059803aa1ed schema:location Cham
44 schema:name Springer International Publishing
45 rdf:type schema:Organisation
46 N670189b7d26741fa8d0cace9a6615c33 rdf:first Ne842e75198b546109c3b688fb46b1b6b
47 rdf:rest rdf:nil
48 N6d63b3e3663c46d393587708b60ec9e8 schema:affiliation https://www.grid.ac/institutes/grid.444537.5
49 schema:familyName Tsutsui
50 schema:givenName Yuji
51 rdf:type schema:Person
52 N79af750e7522401ba6d52027948de0df rdf:first Na02c86461b8b426c878032d49f0164ae
53 rdf:rest rdf:nil
54 N9b1842e4cb4b4a46ba06b4067d620379 rdf:first N2bac9dc14bb6417c9b8e594c6a9324ef
55 rdf:rest N47f0022ef0294b4690c5cf9d600e97eb
56 Na02c86461b8b426c878032d49f0164ae schema:familyName Ahram
57 schema:givenName Tareq Z.
58 rdf:type schema:Person
59 Ncc5872e349a6428c9b11fb4b8f3867a3 schema:name Springer Nature - SN SciGraph project
60 rdf:type schema:Organization
61 Ne842e75198b546109c3b688fb46b1b6b schema:affiliation https://www.grid.ac/institutes/grid.444537.5
62 schema:familyName Yamada
63 schema:givenName Masashi
64 rdf:type schema:Person
65 Neb4a33431eef49ef81be01d9f1b01870 schema:name dimensions_id
66 schema:value pub.1105083230
67 rdf:type schema:PropertyValue
68 Nf84444bd86c34e7b81de3cc1d34dee80 schema:name doi
69 schema:value 10.1007/978-3-319-94619-1_36
70 rdf:type schema:PropertyValue
71 anzsrc-for:17 schema:inDefinedTermSet anzsrc-for:
72 schema:name Psychology and Cognitive Sciences
73 rdf:type schema:DefinedTerm
74 anzsrc-for:1701 schema:inDefinedTermSet anzsrc-for:
75 schema:name Psychology
76 rdf:type schema:DefinedTerm
77 https://doi.org/10.1037/h0077714 schema:sameAs https://app.dimensions.ai/details/publication/pub.1026751541
78 rdf:type schema:CreativeWork
79 https://doi.org/10.1037/h0094102 schema:sameAs https://app.dimensions.ai/details/publication/pub.1045576719
80 rdf:type schema:CreativeWork
81 https://doi.org/10.1080/07494469300640421 schema:sameAs https://app.dimensions.ai/details/publication/pub.1046985022
82 rdf:type schema:CreativeWork
83 https://www.grid.ac/institutes/grid.444537.5 schema:alternateName Kanazawa Institute of Technology
84 schema:name Kanazawa Institute of Technology
85 rdf:type schema:Organization
 



