A Multi-User 3-D Virtual Environment with Interactive Collaboration and Shared Whiteboard Technologies


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

2003-05

AUTHORS

Wing Ho Leung, Tsuhan Chen

ABSTRACT

A multi-user 3-D virtual environment allows remote participants to communicate transparently, as if face-to-face. The sense of presence in such an environment can be established by representing each participant with a vivid, human-like character called an avatar. We review several immersive technologies, including directional sound, eye gaze, hand gestures, lip synchronization and facial expressions, that facilitate multimodal interaction among participants in the virtual environment using speech processing and animation techniques. Interactive collaboration can be further encouraged with the ability to share and manipulate 3-D objects in the virtual environment. A shared whiteboard makes it easy for participants in the virtual environment to convey their ideas graphically. We survey various kinds of capture devices used to provide input for the shared whiteboard. Efficient storage of whiteboard sessions and precise retrieval at a later time raise interesting research topics in information retrieval.

PAGES

7-23

References to SciGraph publications

  • 1992-09. Why do users like video? in COMPUTER SUPPORTED COOPERATIVE WORK (CSCW)
  • 1991. A Transformation Method for Modeling and Animation of the Human Face from Photographs in COMPUTER ANIMATION ’91
Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1023/a:1023466231968

    DOI

    http://dx.doi.org/10.1023/a:1023466231968

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1004009904



    JSON-LD is the canonical representation for SciGraph data.


    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Carnegie Mellon University", 
              "id": "https://www.grid.ac/institutes/grid.147455.6", 
              "name": [
                "Department of Electrical and Computer Engineering, Carnegie Mellon University, 5000 Forbes Avenue, 15213-3890, Pittsburgh, PA, USA"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Leung", 
            "givenName": "Wing Ho", 
            "id": "sg:person.010572327435.12", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010572327435.12"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Carnegie Mellon University", 
              "id": "https://www.grid.ac/institutes/grid.147455.6", 
              "name": [
                "Department of Electrical and Computer Engineering, Carnegie Mellon University, 5000 Forbes Avenue, 15213-3890, Pittsburgh, PA, USA"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Chen", 
            "givenName": "Tsuhan", 
            "id": "sg:person.012245072625.31", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012245072625.31"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "https://doi.org/10.1145/37402.37405", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1000945007"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-4-431-66890-9_4", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1050299710", 
              "https://doi.org/10.1007/978-4-431-66890-9_4"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/bf00752437", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1052007502", 
              "https://doi.org/10.1007/bf00752437"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2003-05", 
        "datePublishedReg": "2003-05-01", 
        "description": "A multi-user 3-D virtual environment allows remote participants to have a transparent communication as if they are communicating face-to-face. The sense of presence in such an environment can be established by representing each participant with a vivid human-like character called an avatar. We review several immersive technologies, including directional sound, eye gaze, hand gestures, lip synchronization and facial expressions, that facilitates multimodal interaction among participants in the virtual environment using speech processing and animation techniques. Interactive collaboration can be further encouraged with the ability to share and manipulate 3-D objects in the virtual environment. A shared whiteboard makes it easy for participants in the virtual environment to convey their ideas graphically. We survey various kinds of capture devices used for providing the input for the shared whiteboard. Efficient storage of the whiteboard session and precise archival at a later time bring up interesting research topics in information retrieval.", 
        "genre": "research_article", 
        "id": "sg:pub.10.1023/a:1023466231968", 
        "inLanguage": [
          "en"
        ], 
        "isAccessibleForFree": false, 
        "isPartOf": [
          {
            "id": "sg:journal.1044869", 
            "issn": [
              "1380-7501", 
              "1573-7721"
            ], 
            "name": "Multimedia Tools and Applications", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "1", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "20"
          }
        ], 
        "name": "A Multi-User 3-D Virtual Environment with Interactive Collaboration and Shared Whiteboard Technologies", 
        "pagination": "7-23", 
        "productId": [
          {
            "name": "readcube_id", 
            "type": "PropertyValue", 
            "value": [
              "131abd6b177e09480ffd84b1201219bf4767594ce65041ed360baec152b41f8f"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1023/a:1023466231968"
            ]
          }, 
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1004009904"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1023/a:1023466231968", 
          "https://app.dimensions.ai/details/publication/pub.1004009904"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2019-04-10T18:18", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8675_00000503.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "http://link.springer.com/10.1023%2FA%3A1023466231968"
      }
    ]
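
    Since the record is plain JSON, it can be inspected with any JSON library. A minimal Python sketch, run against a trimmed copy of the record above so the field names match the JSON-LD exactly:

    ```python
    import json

    # A trimmed copy of the SciGraph JSON-LD record shown above.
    record_jsonld = """
    [
      {
        "author": [
          {"familyName": "Leung", "givenName": "Wing Ho", "type": "Person"},
          {"familyName": "Chen", "givenName": "Tsuhan", "type": "Person"}
        ],
        "datePublished": "2003-05",
        "name": "A Multi-User 3-D Virtual Environment with Interactive Collaboration and Shared Whiteboard Technologies",
        "pagination": "7-23",
        "sameAs": ["https://doi.org/10.1023/a:1023466231968"]
      }
    ]
    """

    # The top-level structure is a one-element JSON array.
    record = json.loads(record_jsonld)[0]
    authors = ", ".join(f"{a['givenName']} {a['familyName']}" for a in record["author"])

    print(record["name"])
    print(authors)                                      # Wing Ho Leung, Tsuhan Chen
    print(record["datePublished"], record["pagination"])  # 2003-05 7-23
    ```

    The same keys (`author`, `datePublished`, `pagination`, `sameAs`) appear in the full record, so the snippet carries over directly to a response fetched from the API.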
     


    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1023/a:1023466231968'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1023/a:1023466231968'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1023/a:1023466231968'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1023/a:1023466231968'
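
    The same content negotiation works from any HTTP client, not just curl. A minimal Python sketch using only the standard library; the Accept header selects the serialization exactly as in the curl calls above (the `rdf_request` helper is illustrative, not part of any SciGraph SDK):

    ```python
    import urllib.request

    URL = "https://scigraph.springernature.com/pub.10.1023/a:1023466231968"

    def rdf_request(accept: str) -> urllib.request.Request:
        """Build a request whose Accept header picks the RDF serialization."""
        return urllib.request.Request(URL, headers={"Accept": accept})

    req = rdf_request("application/ld+json")
    # urllib.request.urlopen(req).read() would fetch the JSON-LD document;
    # the other serializations use application/n-triples, text/turtle,
    # or application/rdf+xml, as listed above.
    print(req.get_header("Accept"))  # application/ld+json
    ```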


     

    This table displays all metadata directly associated to this object as RDF triples.

    79 TRIPLES      21 PREDICATES      30 URIs      19 LITERALS      7 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1023/a:1023466231968 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author Nb38a4e3fdfed4feea4d1b5520df652ad
    4 schema:citation sg:pub.10.1007/978-4-431-66890-9_4
    5 sg:pub.10.1007/bf00752437
    6 https://doi.org/10.1145/37402.37405
    7 schema:datePublished 2003-05
    8 schema:datePublishedReg 2003-05-01
    9 schema:description A multi-user 3-D virtual environment allows remote participants to have a transparent communication as if they are communicating face-to-face. The sense of presence in such an environment can be established by representing each participant with a vivid human-like character called an avatar. We review several immersive technologies, including directional sound, eye gaze, hand gestures, lip synchronization and facial expressions, that facilitates multimodal interaction among participants in the virtual environment using speech processing and animation techniques. Interactive collaboration can be further encouraged with the ability to share and manipulate 3-D objects in the virtual environment. A shared whiteboard makes it easy for participants in the virtual environment to convey their ideas graphically. We survey various kinds of capture devices used for providing the input for the shared whiteboard. Efficient storage of the whiteboard session and precise archival at a later time bring up interesting research topics in information retrieval.
    10 schema:genre research_article
    11 schema:inLanguage en
    12 schema:isAccessibleForFree false
    13 schema:isPartOf Ncbe300a716874b54b91950462ff656db
    14 Nd27a8f6633cd4fe0ba7f5512050f11fd
    15 sg:journal.1044869
    16 schema:name A Multi-User 3-D Virtual Environment with Interactive Collaboration and Shared Whiteboard Technologies
    17 schema:pagination 7-23
    18 schema:productId N9cd162a92bc644d8b503711b4e24e416
    19 Nb88129d6a29b4d8eb709a2435b1df2b3
    20 Nfe733f684d1b41579d28e686f0452246
    21 schema:sameAs https://app.dimensions.ai/details/publication/pub.1004009904
    22 https://doi.org/10.1023/a:1023466231968
    23 schema:sdDatePublished 2019-04-10T18:18
    24 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    25 schema:sdPublisher Nc81bd911355c46e5bbafed098d1a7573
    26 schema:url http://link.springer.com/10.1023%2FA%3A1023466231968
    27 sgo:license sg:explorer/license/
    28 sgo:sdDataset articles
    29 rdf:type schema:ScholarlyArticle
    30 N9cd162a92bc644d8b503711b4e24e416 schema:name readcube_id
    31 schema:value 131abd6b177e09480ffd84b1201219bf4767594ce65041ed360baec152b41f8f
    32 rdf:type schema:PropertyValue
    33 Nb38a4e3fdfed4feea4d1b5520df652ad rdf:first sg:person.010572327435.12
    34 rdf:rest Nc302cf1cc89d43af831bb4db336c5112
    35 Nb88129d6a29b4d8eb709a2435b1df2b3 schema:name dimensions_id
    36 schema:value pub.1004009904
    37 rdf:type schema:PropertyValue
    38 Nc302cf1cc89d43af831bb4db336c5112 rdf:first sg:person.012245072625.31
    39 rdf:rest rdf:nil
    40 Nc81bd911355c46e5bbafed098d1a7573 schema:name Springer Nature - SN SciGraph project
    41 rdf:type schema:Organization
    42 Ncbe300a716874b54b91950462ff656db schema:volumeNumber 20
    43 rdf:type schema:PublicationVolume
    44 Nd27a8f6633cd4fe0ba7f5512050f11fd schema:issueNumber 1
    45 rdf:type schema:PublicationIssue
    46 Nfe733f684d1b41579d28e686f0452246 schema:name doi
    47 schema:value 10.1023/a:1023466231968
    48 rdf:type schema:PropertyValue
    49 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    50 schema:name Information and Computing Sciences
    51 rdf:type schema:DefinedTerm
    52 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    53 schema:name Artificial Intelligence and Image Processing
    54 rdf:type schema:DefinedTerm
    55 sg:journal.1044869 schema:issn 1380-7501
    56 1573-7721
    57 schema:name Multimedia Tools and Applications
    58 rdf:type schema:Periodical
    59 sg:person.010572327435.12 schema:affiliation https://www.grid.ac/institutes/grid.147455.6
    60 schema:familyName Leung
    61 schema:givenName Wing Ho
    62 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010572327435.12
    63 rdf:type schema:Person
    64 sg:person.012245072625.31 schema:affiliation https://www.grid.ac/institutes/grid.147455.6
    65 schema:familyName Chen
    66 schema:givenName Tsuhan
    67 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012245072625.31
    68 rdf:type schema:Person
    69 sg:pub.10.1007/978-4-431-66890-9_4 schema:sameAs https://app.dimensions.ai/details/publication/pub.1050299710
    70 https://doi.org/10.1007/978-4-431-66890-9_4
    71 rdf:type schema:CreativeWork
    72 sg:pub.10.1007/bf00752437 schema:sameAs https://app.dimensions.ai/details/publication/pub.1052007502
    73 https://doi.org/10.1007/bf00752437
    74 rdf:type schema:CreativeWork
    75 https://doi.org/10.1145/37402.37405 schema:sameAs https://app.dimensions.ai/details/publication/pub.1000945007
    76 rdf:type schema:CreativeWork
    77 https://www.grid.ac/institutes/grid.147455.6 schema:alternateName Carnegie Mellon University
    78 schema:name Department of Electrical and Computer Engineering, Carnegie Mellon University, 5000 Forbes Avenue, 15213-3890, Pittsburgh, PA, USA
    79 rdf:type schema:Organization
     



