A depth perception evaluation metric for immersive user experience towards 3D multimedia services


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

2019-01-14

AUTHORS

Huseyin Bayrak, Gokce Nur Yilmaz

ABSTRACT

The interest of users in three-dimensional (3D) video is gaining momentum owing to recent breakthroughs in 3D video entertainment, education, networking, and related technologies. To speed up the advancement of these technologies, monitoring the quality of experience of 3D video, which focuses on the end user's point of view rather than service-oriented provisions, has become a central concern among researchers. Given the stereoscopic viewing ability of the human visual system (HVS), depth perception evaluation of 3D video can be considered one of the most critical parts of this concern. Owing to the lack of efficient and widely used objective metrics in the literature, depth perception assessment can currently be ensured only by subjective measurements, which are costly and time-consuming. Therefore, a no-reference objective metric, which is especially effective for on-the-fly depth perception assessment, is developed in this paper. Three proposed algorithms (i.e., Z-direction motion, structural average depth, and depth deviation), each significant for how the HVS perceives depth in 3D video, are integrated while developing the proposed metric. Considering the outcomes of the proposed metric, it can be stated that the provision of a better 3D video experience to end users can be accelerated in a timely fashion for Future Internet multimedia services.

PAGES

1-9

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/s00530-018-00602-8

DOI

http://dx.doi.org/10.1007/s00530-018-00602-8

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1111409368



JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record with an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool (SDTT).

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "K\u0131r\u0131kkale University", 
          "id": "https://www.grid.ac/institutes/grid.411047.7", 
          "name": [
            "Electrical and Electronics Engineering Department, Kirikkale University, Yahsihan, Kirikkale, Turkey"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Bayrak", 
        "givenName": "Huseyin", 
        "id": "sg:person.07606057171.05", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07606057171.05"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "K\u0131r\u0131kkale University", 
          "id": "https://www.grid.ac/institutes/grid.411047.7", 
          "name": [
            "Electrical and Electronics Engineering Department, Kirikkale University, Yahsihan, Kirikkale, Turkey"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Nur Yilmaz", 
        "givenName": "Gokce", 
        "id": "sg:person.012150043303.44", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012150043303.44"
        ], 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "https://doi.org/10.1016/s0923-5965(03)00076-6", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1004214649"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/j.image.2016.07.003", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1007903474"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/0042-6989(79)90004-x", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1010629894"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/s11042-014-1945-y", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1013811781", 
          "https://doi.org/10.1007/s11042-014-1945-y"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/s11042-015-3011-9", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1020557742", 
          "https://doi.org/10.1007/s11042-015-3011-9"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/j.jvcir.2013.12.009", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1041861821"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/b978-012240530-3/50005-5", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1042292192"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/s11042-015-3172-6", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1043869781", 
          "https://doi.org/10.1007/s11042-015-3172-6"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1049/el:20080522", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1056798056"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/jstsp.2009.2014805", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061337856"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tbc.2004.834028", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061521600"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tcsvt.2015.2430711", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061576550"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.5594/j18516", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1073034368"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tip.2017.2766780", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1092430844"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2019-01-14", 
    "datePublishedReg": "2019-01-14", 
    "description": "The interest of users towards three-dimensional (3D) video is gaining momentum due to the recent breakthroughs in 3D video entertainment, education, network, etc. technologies. In order to speed up the advancement of these technologies, monitoring quality of experience of the 3D video, which focuses on end user\u2019s point of view rather than service-oriented provisions, becomes a central concept among the researchers. Thanks to the stereoscopic viewing ability of human visual system (HVS), the depth perception evaluation of the 3D video can be considered as one of the most critical parts of this central concept. Due to the lack of efficiently and widely utilized objective metrics in literature, the depth perception assessment can currently only be ensured by cost and time-wise troublesome subjective measurements. Therefore, a no-reference objective metric, which is highly effective especially for on the fly depth perception assessment, is developed in this paper. Three proposed algorithms (i.e., Z direction motion, structural average depth and depth deviation) significant for the HVS to perceive the depth of the 3D video are integrated together while developing the proposed metric. Considering the outcomes of the proposed metric, it can be clearly stated that the provision of better 3D video experience to the end users can be accelerated in a timely fashion for the Future Internet multimedia services.", 
    "genre": "research_article", 
    "id": "sg:pub.10.1007/s00530-018-00602-8", 
    "inLanguage": [
      "en"
    ], 
    "isAccessibleForFree": false, 
    "isPartOf": [
      {
        "id": "sg:journal.1284647", 
        "issn": [
          "0942-4962", 
          "1432-1882"
        ], 
        "name": "Multimedia Systems", 
        "type": "Periodical"
      }
    ], 
    "name": "A depth perception evaluation metric for immersive user experience towards 3D multimedia services", 
    "pagination": "1-9", 
    "productId": [
      {
        "name": "readcube_id", 
        "type": "PropertyValue", 
        "value": [
          "affc6c9454da14797d152f97c3a1e96e8777e952e26083004e1d6a69e13f202d"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/s00530-018-00602-8"
        ]
      }, 
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1111409368"
        ]
      }
    ], 
    "sameAs": [
      "https://doi.org/10.1007/s00530-018-00602-8", 
      "https://app.dimensions.ai/details/publication/pub.1111409368"
    ], 
    "sdDataset": "articles", 
    "sdDatePublished": "2019-04-11T08:40", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000319_0000000319/records_11231_00000000.jsonl", 
    "type": "ScholarlyArticle", 
    "url": "https://link.springer.com/10.1007%2Fs00530-018-00602-8"
  }
]
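As a quick sanity check, the record above can be parsed with any standard JSON library and the key bibliographic fields pulled out, without any dedicated JSON-LD tooling. A minimal Python sketch (the field names come from the record itself; the JSON string below is a trimmed copy containing only the fields extracted):

```python
import json

# A trimmed copy of the SciGraph record above (only the fields we extract).
record_json = """
[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
    "name": "A depth perception evaluation metric for immersive user experience towards 3D multimedia services",
    "datePublished": "2019-01-14",
    "author": [
      {"familyName": "Bayrak", "givenName": "Huseyin", "type": "Person"},
      {"familyName": "Nur Yilmaz", "givenName": "Gokce", "type": "Person"}
    ],
    "sameAs": ["https://doi.org/10.1007/s00530-018-00602-8"]
  }
]
"""

records = json.loads(record_json)
pub = records[0]  # the record is a one-element JSON array

title = pub["name"]
date = pub["datePublished"]
authors = [f'{a["givenName"]} {a["familyName"]}' for a in pub["author"]]
doi = next(u for u in pub["sameAs"] if "doi.org" in u)

print(title)
print(date, "|", ", ".join(authors), "|", doi)
```

For anything beyond simple field extraction (e.g., expanding the `@context` or converting to triples), a proper JSON-LD processor is the better tool.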
 

Download the RDF metadata as JSON-LD, N-Triples, Turtle, or RDF/XML.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular linked data format that is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s00530-018-00602-8'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s00530-018-00602-8'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s00530-018-00602-8'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s00530-018-00602-8'
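The same content negotiation can be done from Python's standard library by setting the Accept header explicitly. This mirrors the curl calls above; the endpoint URL and media types are taken from those examples, and since the service's availability is not guaranteed, the sketch only builds the request and leaves the actual fetch commented out:

```python
import urllib.request

SCIGRAPH_URL = "https://scigraph.springernature.com/pub.10.1007/s00530-018-00602-8"

# MIME types accepted by the SciGraph endpoint, per the curl examples above.
FORMATS = {
    "json-ld": "application/ld+json",
    "n-triples": "application/n-triples",
    "turtle": "text/turtle",
    "rdf-xml": "application/rdf+xml",
}

def build_request(fmt: str) -> urllib.request.Request:
    """Build a GET request asking for the record in the given RDF serialization."""
    return urllib.request.Request(SCIGRAPH_URL, headers={"Accept": FORMATS[fmt]})

req = build_request("turtle")
# To actually fetch (network permitting):
#   with urllib.request.urlopen(req) as resp:
#       body = resp.read().decode("utf-8")
print(req.get_header("Accept"))
```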


 

This table displays all metadata directly associated with this object as RDF triples.

107 TRIPLES      21 PREDICATES      38 URIs      16 LITERALS      5 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/s00530-018-00602-8 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author Ndd9cc649981745e1a1abc039cb48e6af
4 schema:citation sg:pub.10.1007/s11042-014-1945-y
5 sg:pub.10.1007/s11042-015-3011-9
6 sg:pub.10.1007/s11042-015-3172-6
7 https://doi.org/10.1016/0042-6989(79)90004-x
8 https://doi.org/10.1016/b978-012240530-3/50005-5
9 https://doi.org/10.1016/j.image.2016.07.003
10 https://doi.org/10.1016/j.jvcir.2013.12.009
11 https://doi.org/10.1016/s0923-5965(03)00076-6
12 https://doi.org/10.1049/el:20080522
13 https://doi.org/10.1109/jstsp.2009.2014805
14 https://doi.org/10.1109/tbc.2004.834028
15 https://doi.org/10.1109/tcsvt.2015.2430711
16 https://doi.org/10.1109/tip.2017.2766780
17 https://doi.org/10.5594/j18516
18 schema:datePublished 2019-01-14
19 schema:datePublishedReg 2019-01-14
20 schema:description The interest of users towards three-dimensional (3D) video is gaining momentum due to the recent breakthroughs in 3D video entertainment, education, network, etc. technologies. In order to speed up the advancement of these technologies, monitoring quality of experience of the 3D video, which focuses on end user’s point of view rather than service-oriented provisions, becomes a central concept among the researchers. Thanks to the stereoscopic viewing ability of human visual system (HVS), the depth perception evaluation of the 3D video can be considered as one of the most critical parts of this central concept. Due to the lack of efficiently and widely utilized objective metrics in literature, the depth perception assessment can currently only be ensured by cost and time-wise troublesome subjective measurements. Therefore, a no-reference objective metric, which is highly effective especially for on the fly depth perception assessment, is developed in this paper. Three proposed algorithms (i.e., Z direction motion, structural average depth and depth deviation) significant for the HVS to perceive the depth of the 3D video are integrated together while developing the proposed metric. Considering the outcomes of the proposed metric, it can be clearly stated that the provision of better 3D video experience to the end users can be accelerated in a timely fashion for the Future Internet multimedia services.
21 schema:genre research_article
22 schema:inLanguage en
23 schema:isAccessibleForFree false
24 schema:isPartOf sg:journal.1284647
25 schema:name A depth perception evaluation metric for immersive user experience towards 3D multimedia services
26 schema:pagination 1-9
27 schema:productId N1493f3896e3b47d1aa7c7e214a9e2b8a
28 N4db2fed3b6f14430ad71d7249ed570f9
29 N7c335aa202994988a78f2ad43ba284e5
30 schema:sameAs https://app.dimensions.ai/details/publication/pub.1111409368
31 https://doi.org/10.1007/s00530-018-00602-8
32 schema:sdDatePublished 2019-04-11T08:40
33 schema:sdLicense https://scigraph.springernature.com/explorer/license/
34 schema:sdPublisher N6f046aa0abfb48089049533a86d95bf1
35 schema:url https://link.springer.com/10.1007%2Fs00530-018-00602-8
36 sgo:license sg:explorer/license/
37 sgo:sdDataset articles
38 rdf:type schema:ScholarlyArticle
39 N1493f3896e3b47d1aa7c7e214a9e2b8a schema:name doi
40 schema:value 10.1007/s00530-018-00602-8
41 rdf:type schema:PropertyValue
42 N4db2fed3b6f14430ad71d7249ed570f9 schema:name dimensions_id
43 schema:value pub.1111409368
44 rdf:type schema:PropertyValue
45 N6878f770fa424ef0bbc5b19daa4a2a07 rdf:first sg:person.012150043303.44
46 rdf:rest rdf:nil
47 N6f046aa0abfb48089049533a86d95bf1 schema:name Springer Nature - SN SciGraph project
48 rdf:type schema:Organization
49 N7c335aa202994988a78f2ad43ba284e5 schema:name readcube_id
50 schema:value affc6c9454da14797d152f97c3a1e96e8777e952e26083004e1d6a69e13f202d
51 rdf:type schema:PropertyValue
52 Ndd9cc649981745e1a1abc039cb48e6af rdf:first sg:person.07606057171.05
53 rdf:rest N6878f770fa424ef0bbc5b19daa4a2a07
54 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
55 schema:name Information and Computing Sciences
56 rdf:type schema:DefinedTerm
57 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
58 schema:name Artificial Intelligence and Image Processing
59 rdf:type schema:DefinedTerm
60 sg:journal.1284647 schema:issn 0942-4962
61 1432-1882
62 schema:name Multimedia Systems
63 rdf:type schema:Periodical
64 sg:person.012150043303.44 schema:affiliation https://www.grid.ac/institutes/grid.411047.7
65 schema:familyName Nur Yilmaz
66 schema:givenName Gokce
67 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012150043303.44
68 rdf:type schema:Person
69 sg:person.07606057171.05 schema:affiliation https://www.grid.ac/institutes/grid.411047.7
70 schema:familyName Bayrak
71 schema:givenName Huseyin
72 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07606057171.05
73 rdf:type schema:Person
74 sg:pub.10.1007/s11042-014-1945-y schema:sameAs https://app.dimensions.ai/details/publication/pub.1013811781
75 https://doi.org/10.1007/s11042-014-1945-y
76 rdf:type schema:CreativeWork
77 sg:pub.10.1007/s11042-015-3011-9 schema:sameAs https://app.dimensions.ai/details/publication/pub.1020557742
78 https://doi.org/10.1007/s11042-015-3011-9
79 rdf:type schema:CreativeWork
80 sg:pub.10.1007/s11042-015-3172-6 schema:sameAs https://app.dimensions.ai/details/publication/pub.1043869781
81 https://doi.org/10.1007/s11042-015-3172-6
82 rdf:type schema:CreativeWork
83 https://doi.org/10.1016/0042-6989(79)90004-x schema:sameAs https://app.dimensions.ai/details/publication/pub.1010629894
84 rdf:type schema:CreativeWork
85 https://doi.org/10.1016/b978-012240530-3/50005-5 schema:sameAs https://app.dimensions.ai/details/publication/pub.1042292192
86 rdf:type schema:CreativeWork
87 https://doi.org/10.1016/j.image.2016.07.003 schema:sameAs https://app.dimensions.ai/details/publication/pub.1007903474
88 rdf:type schema:CreativeWork
89 https://doi.org/10.1016/j.jvcir.2013.12.009 schema:sameAs https://app.dimensions.ai/details/publication/pub.1041861821
90 rdf:type schema:CreativeWork
91 https://doi.org/10.1016/s0923-5965(03)00076-6 schema:sameAs https://app.dimensions.ai/details/publication/pub.1004214649
92 rdf:type schema:CreativeWork
93 https://doi.org/10.1049/el:20080522 schema:sameAs https://app.dimensions.ai/details/publication/pub.1056798056
94 rdf:type schema:CreativeWork
95 https://doi.org/10.1109/jstsp.2009.2014805 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061337856
96 rdf:type schema:CreativeWork
97 https://doi.org/10.1109/tbc.2004.834028 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061521600
98 rdf:type schema:CreativeWork
99 https://doi.org/10.1109/tcsvt.2015.2430711 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061576550
100 rdf:type schema:CreativeWork
101 https://doi.org/10.1109/tip.2017.2766780 schema:sameAs https://app.dimensions.ai/details/publication/pub.1092430844
102 rdf:type schema:CreativeWork
103 https://doi.org/10.5594/j18516 schema:sameAs https://app.dimensions.ai/details/publication/pub.1073034368
104 rdf:type schema:CreativeWork
105 https://www.grid.ac/institutes/grid.411047.7 schema:alternateName Kırıkkale University
106 schema:name Electrical and Electronics Engineering Department, Kirikkale University, Yahsihan, Kirikkale, Turkey
107 rdf:type schema:Organization
 



