A new video magnification technique using complex wavelets with Radon transform application

Ontology type: schema:ScholarlyArticle
PUBLISHED: 2018-11
AUTHORS: Omar M. Fahmy, Gamal Fahmy, Mamdouh F. Fahmy
ABSTRACT: Magnifying micro-movements in natural videos that are undetectable by the human eye has recently received considerable interest, due to its impact on numerous applications. In this paper, we use the dual-tree complex wavelet transform (DT-CWT) to analyze video frames in order to detect and magnify micro-movements and make them visible. We use the DT-CWT because of its excellent edge preservation and near shift invariance. To detect any minor change in an object's spatial position, the paper proposes modifying the phases of the CWT coefficients of successive video frames. Furthermore, the paper applies the Radon transform to track frame micro-movements without any temporal band-pass filtering. The paper starts by presenting a simple technique for designing the orthogonal filters that construct this CWT system. Next, it is shown that modifying the phase differences between the CWT coefficients of an arbitrary frame and a reference frame results in spatial magnification of the image. This, in turn, makes these micro-movements visible and observable. Several simulation results are given to show that the proposed technique competes very well with existing micro-magnification approaches, yielding superior video quality in far less computation time.
PAGES: 1505-1512
URL: http://scigraph.springernature.com/pub.10.1007/s11760-018-1306-9
DOI: http://dx.doi.org/10.1007/s11760-018-1306-9
DIMENSIONS: https://app.dimensions.ai/details/publication/pub.1104265076
JSON-LD is the canonical representation for SciGraph data.
TIP: You can open this SciGraph record in an external JSON-LD service, such as the JSON-LD Playground or the Google Structured Data Testing Tool.
[
{
"@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
"about": [
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Artificial Intelligence and Image Processing",
"type": "DefinedTerm"
},
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Information and Computing Sciences",
"type": "DefinedTerm"
}
],
"author": [
{
"affiliation": {
"alternateName": "Future University in Egypt",
"id": "https://www.grid.ac/institutes/grid.440865.b",
"name": [
"Electrical Engineering Department, Future University in Egypt (FUE), Cairo, Egypt"
],
"type": "Organization"
},
"familyName": "Fahmy",
"givenName": "Omar M.",
"id": "sg:person.016044670113.34",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016044670113.34"
],
"type": "Person"
},
{
"affiliation": {
"alternateName": "Prince Sattam Bin Abdulaziz University",
"id": "https://www.grid.ac/institutes/grid.449553.a",
"name": [
"Electrical Engineering Department, Prince Sattam Bin Abdulaziz University, Al-Saih, Saudi Arabia"
],
"type": "Organization"
},
"familyName": "Fahmy",
"givenName": "Gamal",
"id": "sg:person.010027671351.05",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010027671351.05"
],
"type": "Person"
},
{
"affiliation": {
"alternateName": "Assiut University",
"id": "https://www.grid.ac/institutes/grid.252487.e",
"name": [
"Electrical Engineering Department, Assiut University in Egypt, Asy\u00fbt, Egypt"
],
"type": "Organization"
},
"familyName": "Fahmy",
"givenName": "Mamdouh F.",
"id": "sg:person.012376013410.91",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012376013410.91"
],
"type": "Person"
}
],
"citation": [
{
"id": "https://doi.org/10.1006/acha.2000.0343",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1014478159"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/j.sigpro.2005.09.024",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1023265858"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1145/2461912.2461966",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1033929155"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/j.cag.2010.05.017",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1037492841"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1145/1073204.1073223",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1052005641"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1145/2185520.2185561",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1052876203"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/34.93808",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1061157293"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/msp.2005.1550194",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1061422415"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/tip.2008.926147",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1061642146"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1145/1141911.1142010",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1063152004"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1364/oe.18.010762",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1065193557"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1049/iet-ipr.2017.0049",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1092015862"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/icip.1995.537667",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1094903731"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/icip.2000.899397",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1095494704"
],
"type": "CreativeWork"
}
],
"datePublished": "2018-11",
"datePublishedReg": "2018-11-01",
"description": "Magnifying micro-movements of natural videos that are undetectable by human eye has recently received considerable interests, due to its impact in numerous applications. In this paper, we use dual tree complex wavelet transform (DT-CWT), to analyze video frames in order to detect and magnify micro-movements to make them visible. We use DT-CWT, due to its excellent edge-preserving and nearly-shift invariant features. In order to detect any minor change in object\u2019s spatial position, the paper proposes to modify the phases of the CWT coefficients decomposition of successive video frames. Furthermore, the paper applies Radon transform to track frame micro-movements without any temporal band-pass filtering. The paper starts by presenting a simple technique to design orthogonal filters that construct this CWT system. Next, it is shown that modifying the phase differences between the CWT coefficients of arbitrary frame and a reference one results in image spatial magnification. This in turn, makes these micro-movements seen and observable. Several simulation results are given, to show that the proposed technique competes very well to the existing micro-magnification approaches. In fact, as it manages to yield superior video quality in far less computation time.",
"genre": "research_article",
"id": "sg:pub.10.1007/s11760-018-1306-9",
"inLanguage": [
"en"
],
"isAccessibleForFree": false,
"isPartOf": [
{
"id": "sg:journal.1050964",
"issn": [
"1863-1703",
"1863-1711"
],
"name": "Signal, Image and Video Processing",
"type": "Periodical"
},
{
"issueNumber": "8",
"type": "PublicationIssue"
},
{
"type": "PublicationVolume",
"volumeNumber": "12"
}
],
"name": "A new video magnification technique using complex wavelets with Radon transform application",
"pagination": "1505-1512",
"productId": [
{
"name": "readcube_id",
"type": "PropertyValue",
"value": [
"712cb3cd9b507c16755f958a6617c8ef273b115d0440a5924f77e5efc7c61b27"
]
},
{
"name": "doi",
"type": "PropertyValue",
"value": [
"10.1007/s11760-018-1306-9"
]
},
{
"name": "dimensions_id",
"type": "PropertyValue",
"value": [
"pub.1104265076"
]
}
],
"sameAs": [
"https://doi.org/10.1007/s11760-018-1306-9",
"https://app.dimensions.ai/details/publication/pub.1104265076"
],
"sdDataset": "articles",
"sdDatePublished": "2019-04-10T14:11",
"sdLicense": "https://scigraph.springernature.com/explorer/license/",
"sdPublisher": {
"name": "Springer Nature - SN SciGraph project",
"type": "Organization"
},
"sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8660_00000517.jsonl",
"type": "ScholarlyArticle",
"url": "http://link.springer.com/10.1007%2Fs11760-018-1306-9"
}
]
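Because the record above is plain JSON-LD, it can be consumed with ordinary JSON tooling. The sketch below uses only Python's standard library; the embedded string is a hand-trimmed subset of the record shown above (not the full download), kept just large enough to illustrate pulling out the title, authors, and DOI:

```python
import json

# Hand-trimmed subset of the SciGraph JSON-LD record shown above.
record_jsonld = """
[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
    "author": [
      {"familyName": "Fahmy", "givenName": "Omar M.", "type": "Person"},
      {"familyName": "Fahmy", "givenName": "Gamal", "type": "Person"},
      {"familyName": "Fahmy", "givenName": "Mamdouh F.", "type": "Person"}
    ],
    "name": "A new video magnification technique using complex wavelets with Radon transform application",
    "productId": [
      {"name": "doi", "type": "PropertyValue", "value": ["10.1007/s11760-018-1306-9"]}
    ],
    "type": "ScholarlyArticle"
  }
]
"""

records = json.loads(record_jsonld)
article = records[0]  # the record is a one-element JSON-LD array

title = article["name"]
authors = ["{} {}".format(a["givenName"], a["familyName"]) for a in article["author"]]
# productId is a list of PropertyValue objects; pick the entry named "doi".
doi = next(p["value"][0] for p in article["productId"] if p["name"] == "doi")

print(title)
print(", ".join(authors))
print(doi)
```

The same pattern extends to the other fields (`citation`, `isPartOf`, `sameAs`), all of which are ordinary JSON arrays and objects under the JSON-LD `@context`.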
Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.
JSON-LD is a popular format for linked data which is fully compatible with JSON.
curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s11760-018-1306-9'
N-Triples is a line-based linked data format ideal for batch operations.
curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s11760-018-1306-9'
Turtle is a human-readable linked data format.
curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s11760-018-1306-9'
RDF/XML is a standard XML format for linked data.
curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s11760-018-1306-9'
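The curl commands above all hit the same URL and select the serialization purely via the `Accept` header. The same content negotiation can be mirrored from Python's standard library; this sketch only builds the request (no network round trip is made, and the `FORMATS` mapping is assembled here from the MIME types in the curl examples):

```python
from urllib.request import Request

SCIGRAPH_URL = "https://scigraph.springernature.com/pub.10.1007/s11760-018-1306-9"

# MIME types matching the curl examples above.
FORMATS = {
    "json-ld": "application/ld+json",
    "n-triples": "application/n-triples",
    "turtle": "text/turtle",
    "rdf-xml": "application/rdf+xml",
}

def build_request(fmt: str) -> Request:
    """Build a content-negotiated request for one RDF serialization.

    To actually fetch the record, pass the result to urllib.request.urlopen().
    """
    req = Request(SCIGRAPH_URL)
    req.add_header("Accept", FORMATS[fmt])
    return req

turtle_req = build_request("turtle")
print(turtle_req.get_header("Accept"))
```

Requesting an unlisted format simply raises a `KeyError` here; the live service decides for itself how to respond to an `Accept` type it does not serve.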
This table displays all metadata directly associated with this object as RDF triples.
123 TRIPLES · 21 PREDICATES · 41 URIs · 19 LITERALS · 7 BLANK NODES