Deep learning for real-time fruit detection and orchard fruit load estimation: benchmarking of ‘MangoYOLO’


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

2019-02-28

AUTHORS

A. Koirala, K. B. Walsh, Z. Wang, C. McCarthy

ABSTRACT

The performance of six existing deep learning architectures was compared for the task of detecting mango fruit in images of tree canopies. Images of trees (n = 1 515) from across five orchards were acquired at night using a 5 megapixel RGB digital camera and 720 W of LED flood lighting in a rig mounted on a farm utility vehicle operating at 6 km/h. The two-stage deep learning architectures Faster R-CNN(VGG) and Faster R-CNN(ZF), and the single-stage techniques YOLOv3, YOLOv2, YOLOv2(tiny) and SSD, were trained with both original-resolution and 512 × 512 pixel versions of 1 300 training tiles, while YOLOv3 was run only with 512 × 512 pixel images, giving a total of eleven models. A new architecture was also developed, based on features of YOLOv3 and YOLOv2(tiny), against the design criteria of accuracy and speed for the current application. This architecture, termed ‘MangoYOLO’, was trained using: (i) the 1 300-tile training set, (ii) the COCO dataset before training on the mango training set, and (iii) a daytime image training set from a previous publication, to create the MangoYOLO models ‘s’, ‘pt’ and ‘bu’, respectively. Average Precision plateaued with use of around 400 training tiles. MangoYOLO(pt) achieved an F1 score of 0.968 and Average Precision of 0.983 on a test set independent of the training set, outperforming the other algorithms, with a detection speed of 8 ms per 512 × 512 pixel image tile while using just 833 Mb of GPU memory per image (on the NVIDIA GeForce GTX 1070 Ti GPU used for in-field application). The MangoYOLO model also outperformed other models in processing of full images, requiring just 70 ms per image (2 048 × 2 048 pixels; i.e., capable of processing ~ 14 fps) with use of 4 417 Mb of GPU memory. The model was robust in use with images of other orchards, cultivars and lighting conditions. MangoYOLO(bu) achieved an F1 score of 0.89 on a daytime mango image dataset.
With use of a correction factor estimated from the ratio of the human count of fruit in images of the two sides of sample trees per orchard to a hand-harvest count of all fruit on those trees, MangoYOLO(pt) achieved orchard fruit load estimates within 4.6 and 15.2% of packhouse fruit counts for the five orchards considered. The labelled images (1 300 training, 130 validation and 300 test) of this study are available for comparative studies.
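The correction-factor approach described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the correction factor is the hand-harvest count of the sample trees divided by the human count of fruit visible in the two-side images of those trees, and all numbers are made up for the example.

```python
def corrected_orchard_estimate(machine_counts, human_image_count, harvest_count):
    """Scale per-image machine-vision fruit counts to an orchard estimate.

    correction = (hand-harvest count of all fruit on the sample trees)
               / (human count of fruit visible in dual-view images
                  of the same trees)
    """
    correction = harvest_count / human_image_count
    return correction * sum(machine_counts)


# Illustrative numbers only (not from the study): machine vision counted
# 70 + 55 + 60 fruit across three sample trees; humans counted 180 fruit
# in the two-side images, while hand harvest yielded 200 fruit.
estimate = corrected_orchard_estimate([70, 55, 60], 180, 200)
```

The correction compensates for fruit occluded from both camera views, which neither the machine-vision count nor the human image count can see.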

PAGES

1-29

References to SciGraph publications

  • 2010-06. The Pascal Visual Object Classes (VOC) Challenge in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2016. SSD: Single Shot MultiBox Detector in COMPUTER VISION – ECCV 2016
  • 2014. Visualizing and Understanding Convolutional Networks in COMPUTER VISION – ECCV 2014
  • 2017-04. Machine vision for counting fruit on mango tree canopies in PRECISION AGRICULTURE
  • 2018-10-11. Estimation of fruit load in mango orchards: tree sampling considerations and use of machine vision and satellite imagery in PRECISION AGRICULTURE
  • 2015. A Feature Learning Based Approach for Automated Fruit Yield Estimation in FIELD AND SERVICE ROBOTICS
  • 2014. Microsoft COCO: Common Objects in Context in COMPUTER VISION – ECCV 2014
  • Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/s11119-019-09642-0

    DOI

    http://dx.doi.org/10.1007/s11119-019-09642-0

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1112460844


    Indexing Status — check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool
    Incoming Citations — browse incoming citations for this publication using opencitations.net

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Central Queensland University", 
              "id": "https://www.grid.ac/institutes/grid.1023.0", 
              "name": [
                "Institute for Future Farming Systems, Central Queensland University, Bruce Highway, Building 361, 4701, Rockhampton, QLD, Australia"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Koirala", 
            "givenName": "A.", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Central Queensland University", 
              "id": "https://www.grid.ac/institutes/grid.1023.0", 
              "name": [
                "Institute for Future Farming Systems, Central Queensland University, Bruce Highway, Building 361, 4701, Rockhampton, QLD, Australia"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Walsh", 
            "givenName": "K. B.", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Central Queensland University", 
              "id": "https://www.grid.ac/institutes/grid.1023.0", 
              "name": [
                "Institute for Future Farming Systems, Central Queensland University, Bruce Highway, Building 361, 4701, Rockhampton, QLD, Australia"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Wang", 
            "givenName": "Z.", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "University of Southern Queensland", 
              "id": "https://www.grid.ac/institutes/grid.1048.d", 
              "name": [
                "Centre for Agricultural Engineering (Operations), University of Southern Queensland, West Street, Building P9-132, 4350, Toowoomba, QLD, Australia"
              ], 
              "type": "Organization"
            }, 
            "familyName": "McCarthy", 
            "givenName": "C.", 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "https://doi.org/10.3390/s16081222", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1005608545"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.3390/s16111915", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1008015271"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-009-0275-4", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1014796149", 
              "https://doi.org/10.1007/s11263-009-0275-4"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-46448-0_2", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1017177111", 
              "https://doi.org/10.1007/978-3-319-46448-0_2"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-07488-7_33", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1019374226", 
              "https://doi.org/10.1007/978-3-319-07488-7_33"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.compag.2012.11.009", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1021270917"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.compag.2015.05.021", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1025104540"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.compag.2013.11.011", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1026899642"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1201/b17441-17", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1032075089"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-10590-1_53", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1032233097", 
              "https://doi.org/10.1007/978-3-319-10590-1_53"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-10602-1_48", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1045321436", 
              "https://doi.org/10.1007/978-3-319-10602-1_48"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11119-016-9458-5", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1048308698", 
              "https://doi.org/10.1007/s11119-016-9458-5"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/tpami.2016.2577031", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061745117"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.12988/ams.2015.53290", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1064853674"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.13031/2013.3096", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1064897318"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1002/rob.21699", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1083753722"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.3390/s17122738", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1093072667"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2016.90", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1093359587"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/icra.2017.7989417", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094478815"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/iv.2014.54", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094508635"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2014.81", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094727707"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/iccv.2015.169", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095573598"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2009.5206848", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095689025"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2016.91", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095811486"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2017.690", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095851797"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2017.106", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095852454"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/iccv.2017.322", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1100060307"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/iccv.2017.324", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1100060309"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.compag.2018.02.016", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1101168054"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.19103/as.2017.0026.14", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1101195702"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/j.compag.2018.06.040", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1105052540"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11119-018-9614-1", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107553295", 
              "https://doi.org/10.1007/s11119-018-9614-1"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2019-02-28", 
        "datePublishedReg": "2019-02-28", 
        "description": "The performance of six existing deep learning architectures were compared for the task of detection of mango fruit in images of tree canopies. Images of trees (n = 1 515) from across five orchards were acquired at night using a 5 Mega-pixel RGB digital camera and 720 W of LED flood lighting in a rig mounted on a farm utility vehicle operating at 6 km/h. The two stage deep learning architectures of Faster R-CNN(VGG) and Faster R-CNN(ZF), and the single stage techniques YOLOv3, YOLOv2, YOLOv2(tiny) and SSD were trained both with original resolution and 512 \u00d7 512 pixel versions of 1 300 training tiles, while YOLOv3 was run only with 512 \u00d7 512 pixel images, giving a total of eleven models. A new architecture was also developed, based on features of YOLOv3 and YOLOv2(tiny), on the design criteria of accuracy and speed for the current application. This architecture, termed \u2018MangoYOLO\u2019, was trained using: (i) the 1 300 tile training set, (ii) the COCO dataset before training on the mango training set, and (iii) a daytime image training set of a previous publication, to create the MangoYOLO models \u2018s\u2019, \u2018pt\u2019 and \u2018bu\u2019, respectively. Average Precision plateaued with use of around 400 training tiles. MangoYOLO(pt) achieved a F1 score of 0.968 and Average Precision of 0.983 on a test set independent of the training set, outperforming other algorithms, with a detection speed of 8 ms per 512 \u00d7 512 pixel image tile while using just 833 Mb GPU memory per image (on a NVIDIA GeForce GTX 1070 Ti GPU) used for in-field application. The MangoYOLO model also outperformed other models in processing of full images, requiring just 70 ms per image (2 048 \u00d7 2 048 pixels) (i.e., capable of processing ~ 14 fps) with use of 4 417 Mb of GPU memory. The model was robust in use with images of other orchards, cultivars and lighting conditions. 
MangoYOLO(bu) achieved a F1 score of 0.89 on a day-time mango image dataset. With use of a correction factor estimated from the ratio of human count of fruit in images of the two sides of sample trees per orchard and a hand harvest count of all fruit on those trees, MangoYOLO(pt) achieved orchard fruit load estimates of between 4.6 and 15.2% of packhouse fruit counts for the five orchards considered. The labelled images (1 300 training, 130 validation and 300 test) of this study are available for comparative studies.", 
        "genre": "research_article", 
        "id": "sg:pub.10.1007/s11119-019-09642-0", 
        "inLanguage": [
          "en"
        ], 
        "isAccessibleForFree": false, 
        "isPartOf": [
          {
            "id": "sg:journal.1135929", 
            "issn": [
              "1385-2256", 
              "1573-1618"
            ], 
            "name": "Precision Agriculture", 
            "type": "Periodical"
          }
        ], 
        "name": "Deep learning for real-time fruit detection and orchard fruit load estimation: benchmarking of \u2018MangoYOLO\u2019", 
        "pagination": "1-29", 
        "productId": [
          {
            "name": "readcube_id", 
            "type": "PropertyValue", 
            "value": [
              "70877af64f2267d968b94765619588c47e95268b132aaa766ff2794edf3596b3"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s11119-019-09642-0"
            ]
          }, 
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1112460844"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s11119-019-09642-0", 
          "https://app.dimensions.ai/details/publication/pub.1112460844"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2019-04-11T10:35", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000349_0000000349/records_113667_00000005.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://link.springer.com/10.1007%2Fs11119-019-09642-0"
      }
    ]
     


    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s11119-019-09642-0'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s11119-019-09642-0'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s11119-019-09642-0'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s11119-019-09642-0'
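The curl commands above can be reproduced in Python with the standard library alone; a minimal sketch, assuming the endpoint still honours `Accept: application/ld+json` content negotiation as documented above (the `sample` record is an offline stand-in for the real response):

```python
import json
import urllib.request

URL = "https://scigraph.springernature.com/pub.10.1007/s11119-019-09642-0"


def fetch_jsonld(url=URL):
    """Fetch the SciGraph record as JSON-LD via content negotiation,
    equivalent to the first curl command above."""
    req = urllib.request.Request(url, headers={"Accept": "application/ld+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def article_title(record):
    """Pull schema:name from the first node of the JSON-LD array."""
    return record[0]["name"]


# Offline demonstration on a minimal stand-in for the real response:
sample = json.loads('[{"name": "Deep learning for real-time fruit detection"}]')
print(article_title(sample))
```

Swapping the `Accept` header for `application/n-triples`, `text/turtle` or `application/rdf+xml` retrieves the other serialisations listed above.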


     

    This table displays all metadata directly associated to this object as RDF triples.

    178 TRIPLES      21 PREDICATES      56 URIs      16 LITERALS      5 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/s11119-019-09642-0 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author N92ab17705be8452983d297ce19c356ff
    4 schema:citation sg:pub.10.1007/978-3-319-07488-7_33
    5 sg:pub.10.1007/978-3-319-10590-1_53
    6 sg:pub.10.1007/978-3-319-10602-1_48
    7 sg:pub.10.1007/978-3-319-46448-0_2
    8 sg:pub.10.1007/s11119-016-9458-5
    9 sg:pub.10.1007/s11119-018-9614-1
    10 sg:pub.10.1007/s11263-009-0275-4
    11 https://doi.org/10.1002/rob.21699
    12 https://doi.org/10.1016/j.compag.2012.11.009
    13 https://doi.org/10.1016/j.compag.2013.11.011
    14 https://doi.org/10.1016/j.compag.2015.05.021
    15 https://doi.org/10.1016/j.compag.2018.02.016
    16 https://doi.org/10.1016/j.compag.2018.06.040
    17 https://doi.org/10.1109/cvpr.2009.5206848
    18 https://doi.org/10.1109/cvpr.2014.81
    19 https://doi.org/10.1109/cvpr.2016.90
    20 https://doi.org/10.1109/cvpr.2016.91
    21 https://doi.org/10.1109/cvpr.2017.106
    22 https://doi.org/10.1109/cvpr.2017.690
    23 https://doi.org/10.1109/iccv.2015.169
    24 https://doi.org/10.1109/iccv.2017.322
    25 https://doi.org/10.1109/iccv.2017.324
    26 https://doi.org/10.1109/icra.2017.7989417
    27 https://doi.org/10.1109/iv.2014.54
    28 https://doi.org/10.1109/tpami.2016.2577031
    29 https://doi.org/10.1201/b17441-17
    30 https://doi.org/10.12988/ams.2015.53290
    31 https://doi.org/10.13031/2013.3096
    32 https://doi.org/10.19103/as.2017.0026.14
    33 https://doi.org/10.3390/s16081222
    34 https://doi.org/10.3390/s16111915
    35 https://doi.org/10.3390/s17122738
    36 schema:datePublished 2019-02-28
    37 schema:datePublishedReg 2019-02-28
    38 schema:description The performance of six existing deep learning architectures were compared for the task of detection of mango fruit in images of tree canopies. Images of trees (n = 1 515) from across five orchards were acquired at night using a 5 Mega-pixel RGB digital camera and 720 W of LED flood lighting in a rig mounted on a farm utility vehicle operating at 6 km/h. The two stage deep learning architectures of Faster R-CNN(VGG) and Faster R-CNN(ZF), and the single stage techniques YOLOv3, YOLOv2, YOLOv2(tiny) and SSD were trained both with original resolution and 512 × 512 pixel versions of 1 300 training tiles, while YOLOv3 was run only with 512 × 512 pixel images, giving a total of eleven models. A new architecture was also developed, based on features of YOLOv3 and YOLOv2(tiny), on the design criteria of accuracy and speed for the current application. This architecture, termed ‘MangoYOLO’, was trained using: (i) the 1 300 tile training set, (ii) the COCO dataset before training on the mango training set, and (iii) a daytime image training set of a previous publication, to create the MangoYOLO models ‘s’, ‘pt’ and ‘bu’, respectively. Average Precision plateaued with use of around 400 training tiles. MangoYOLO(pt) achieved a F1 score of 0.968 and Average Precision of 0.983 on a test set independent of the training set, outperforming other algorithms, with a detection speed of 8 ms per 512 × 512 pixel image tile while using just 833 Mb GPU memory per image (on a NVIDIA GeForce GTX 1070 Ti GPU) used for in-field application. The MangoYOLO model also outperformed other models in processing of full images, requiring just 70 ms per image (2 048 × 2 048 pixels) (i.e., capable of processing ~ 14 fps) with use of 4 417 Mb of GPU memory. The model was robust in use with images of other orchards, cultivars and lighting conditions. MangoYOLO(bu) achieved a F1 score of 0.89 on a day-time mango image dataset. 
With use of a correction factor estimated from the ratio of human count of fruit in images of the two sides of sample trees per orchard and a hand harvest count of all fruit on those trees, MangoYOLO(pt) achieved orchard fruit load estimates of between 4.6 and 15.2% of packhouse fruit counts for the five orchards considered. The labelled images (1 300 training, 130 validation and 300 test) of this study are available for comparative studies.
    39 schema:genre research_article
    40 schema:inLanguage en
    41 schema:isAccessibleForFree false
    42 schema:isPartOf sg:journal.1135929
    43 schema:name Deep learning for real-time fruit detection and orchard fruit load estimation: benchmarking of ‘MangoYOLO’
    44 schema:pagination 1-29
    45 schema:productId N6429ecb3192b4755b7637d97cc74e72d
    46 Nb944f52fc3e64c3e8b41d589ea850589
    47 Ne48571f4172140f5a241fefc1c7c6ab8
    48 schema:sameAs https://app.dimensions.ai/details/publication/pub.1112460844
    49 https://doi.org/10.1007/s11119-019-09642-0
    50 schema:sdDatePublished 2019-04-11T10:35
    51 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    52 schema:sdPublisher N8d7fabb45db8450399b0584b6ea78fbf
    53 schema:url https://link.springer.com/10.1007%2Fs11119-019-09642-0
    54 sgo:license sg:explorer/license/
    55 sgo:sdDataset articles
    56 rdf:type schema:ScholarlyArticle
    57 N6429ecb3192b4755b7637d97cc74e72d schema:name dimensions_id
    58 schema:value pub.1112460844
    59 rdf:type schema:PropertyValue
    60 N7a62913c7de84449976cd0062b0e596b rdf:first Nec73d243469c4605bf28cd279a0bc715
    61 rdf:rest Nfe6d4ac712ca4e13bb69cedb3628cef9
    62 N8d7fabb45db8450399b0584b6ea78fbf schema:name Springer Nature - SN SciGraph project
    63 rdf:type schema:Organization
    64 N92ab17705be8452983d297ce19c356ff rdf:first N98ce0e9a28c84661a9a59c267c5ee786
    65 rdf:rest Nc23b589cd4f844bd968876ac96e2b23b
    66 N93f93b7483b04553a56d3dc6ed04b229 schema:affiliation https://www.grid.ac/institutes/grid.1048.d
    67 schema:familyName McCarthy
    68 schema:givenName C.
    69 rdf:type schema:Person
    70 N98ce0e9a28c84661a9a59c267c5ee786 schema:affiliation https://www.grid.ac/institutes/grid.1023.0
    71 schema:familyName Koirala
    72 schema:givenName A.
    73 rdf:type schema:Person
    74 Nb84821e68d7b4b88bedb21a2b2b05cb5 schema:affiliation https://www.grid.ac/institutes/grid.1023.0
    75 schema:familyName Walsh
    76 schema:givenName K. B.
    77 rdf:type schema:Person
    78 Nb944f52fc3e64c3e8b41d589ea850589 schema:name doi
    79 schema:value 10.1007/s11119-019-09642-0
    80 rdf:type schema:PropertyValue
    81 Nc23b589cd4f844bd968876ac96e2b23b rdf:first Nb84821e68d7b4b88bedb21a2b2b05cb5
    82 rdf:rest N7a62913c7de84449976cd0062b0e596b
    83 Ne48571f4172140f5a241fefc1c7c6ab8 schema:name readcube_id
    84 schema:value 70877af64f2267d968b94765619588c47e95268b132aaa766ff2794edf3596b3
    85 rdf:type schema:PropertyValue
    86 Nec73d243469c4605bf28cd279a0bc715 schema:affiliation https://www.grid.ac/institutes/grid.1023.0
    87 schema:familyName Wang
    88 schema:givenName Z.
    89 rdf:type schema:Person
    90 Nfe6d4ac712ca4e13bb69cedb3628cef9 rdf:first N93f93b7483b04553a56d3dc6ed04b229
    91 rdf:rest rdf:nil
    92 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    93 schema:name Information and Computing Sciences
    94 rdf:type schema:DefinedTerm
    95 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    96 schema:name Artificial Intelligence and Image Processing
    97 rdf:type schema:DefinedTerm
    98 sg:journal.1135929 schema:issn 1385-2256
    99 1573-1618
    100 schema:name Precision Agriculture
    101 rdf:type schema:Periodical
    102 sg:pub.10.1007/978-3-319-07488-7_33 schema:sameAs https://app.dimensions.ai/details/publication/pub.1019374226
    103 https://doi.org/10.1007/978-3-319-07488-7_33
    104 rdf:type schema:CreativeWork
    105 sg:pub.10.1007/978-3-319-10590-1_53 schema:sameAs https://app.dimensions.ai/details/publication/pub.1032233097
    106 https://doi.org/10.1007/978-3-319-10590-1_53
    107 rdf:type schema:CreativeWork
    108 sg:pub.10.1007/978-3-319-10602-1_48 schema:sameAs https://app.dimensions.ai/details/publication/pub.1045321436
    109 https://doi.org/10.1007/978-3-319-10602-1_48
    110 rdf:type schema:CreativeWork
    111 sg:pub.10.1007/978-3-319-46448-0_2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1017177111
    112 https://doi.org/10.1007/978-3-319-46448-0_2
    113 rdf:type schema:CreativeWork
    114 sg:pub.10.1007/s11119-016-9458-5 schema:sameAs https://app.dimensions.ai/details/publication/pub.1048308698
    115 https://doi.org/10.1007/s11119-016-9458-5
    116 rdf:type schema:CreativeWork
    117 sg:pub.10.1007/s11119-018-9614-1 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107553295
    118 https://doi.org/10.1007/s11119-018-9614-1
    119 rdf:type schema:CreativeWork
    120 sg:pub.10.1007/s11263-009-0275-4 schema:sameAs https://app.dimensions.ai/details/publication/pub.1014796149
    121 https://doi.org/10.1007/s11263-009-0275-4
    122 rdf:type schema:CreativeWork
    123 https://doi.org/10.1002/rob.21699 schema:sameAs https://app.dimensions.ai/details/publication/pub.1083753722
    124 rdf:type schema:CreativeWork
    125 https://doi.org/10.1016/j.compag.2012.11.009 schema:sameAs https://app.dimensions.ai/details/publication/pub.1021270917
    126 rdf:type schema:CreativeWork
    127 https://doi.org/10.1016/j.compag.2013.11.011 schema:sameAs https://app.dimensions.ai/details/publication/pub.1026899642
    128 rdf:type schema:CreativeWork
    129 https://doi.org/10.1016/j.compag.2015.05.021 schema:sameAs https://app.dimensions.ai/details/publication/pub.1025104540
    130 rdf:type schema:CreativeWork
    131 https://doi.org/10.1016/j.compag.2018.02.016 schema:sameAs https://app.dimensions.ai/details/publication/pub.1101168054
    132 rdf:type schema:CreativeWork
    133 https://doi.org/10.1016/j.compag.2018.06.040 schema:sameAs https://app.dimensions.ai/details/publication/pub.1105052540
    134 rdf:type schema:CreativeWork
    135 https://doi.org/10.1109/cvpr.2009.5206848 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095689025
    136 rdf:type schema:CreativeWork
    137 https://doi.org/10.1109/cvpr.2014.81 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094727707
    138 rdf:type schema:CreativeWork
    139 https://doi.org/10.1109/cvpr.2016.90 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093359587
    140 rdf:type schema:CreativeWork
    141 https://doi.org/10.1109/cvpr.2016.91 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095811486
    142 rdf:type schema:CreativeWork
    143 https://doi.org/10.1109/cvpr.2017.106 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095852454
    144 rdf:type schema:CreativeWork
    145 https://doi.org/10.1109/cvpr.2017.690 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095851797
    146 rdf:type schema:CreativeWork
    147 https://doi.org/10.1109/iccv.2015.169 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095573598
    148 rdf:type schema:CreativeWork
    149 https://doi.org/10.1109/iccv.2017.322 schema:sameAs https://app.dimensions.ai/details/publication/pub.1100060307
    150 rdf:type schema:CreativeWork
    151 https://doi.org/10.1109/iccv.2017.324 schema:sameAs https://app.dimensions.ai/details/publication/pub.1100060309
    152 rdf:type schema:CreativeWork
    153 https://doi.org/10.1109/icra.2017.7989417 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094478815
    154 rdf:type schema:CreativeWork
    155 https://doi.org/10.1109/iv.2014.54 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094508635
    156 rdf:type schema:CreativeWork
    157 https://doi.org/10.1109/tpami.2016.2577031 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061745117
    158 rdf:type schema:CreativeWork
    159 https://doi.org/10.1201/b17441-17 schema:sameAs https://app.dimensions.ai/details/publication/pub.1032075089
    160 rdf:type schema:CreativeWork
    161 https://doi.org/10.12988/ams.2015.53290 schema:sameAs https://app.dimensions.ai/details/publication/pub.1064853674
    162 rdf:type schema:CreativeWork
    163 https://doi.org/10.13031/2013.3096 schema:sameAs https://app.dimensions.ai/details/publication/pub.1064897318
    164 rdf:type schema:CreativeWork
    165 https://doi.org/10.19103/as.2017.0026.14 schema:sameAs https://app.dimensions.ai/details/publication/pub.1101195702
    166 rdf:type schema:CreativeWork
    167 https://doi.org/10.3390/s16081222 schema:sameAs https://app.dimensions.ai/details/publication/pub.1005608545
    168 rdf:type schema:CreativeWork
    169 https://doi.org/10.3390/s16111915 schema:sameAs https://app.dimensions.ai/details/publication/pub.1008015271
    170 rdf:type schema:CreativeWork
    171 https://doi.org/10.3390/s17122738 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093072667
    172 rdf:type schema:CreativeWork
    173 https://www.grid.ac/institutes/grid.1023.0 schema:alternateName Central Queensland University
    174 schema:name Institute for Future Farming Systems, Central Queensland University, Bruce Highway, Building 361, 4701, Rockhampton, QLD, Australia
    175 rdf:type schema:Organization
    176 https://www.grid.ac/institutes/grid.1048.d schema:alternateName University of Southern Queensland
    177 schema:name Centre for Agricultural Engineering (Operations), University of Southern Queensland, West Street, Building P9-132, 4350, Toowoomba, QLD, Australia
    178 rdf:type schema:Organization
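The triple count reported above the table can be checked directly against the N-Triples serialisation, which is line-based: one statement per non-empty, non-comment line. A stdlib sketch (the `sample` data is illustrative, not from this record):

```python
def count_triples(ntriples: str) -> int:
    """Count statements in an N-Triples document: one triple per
    non-empty line that is not a '#' comment."""
    return sum(
        1
        for line in ntriples.splitlines()
        if line.strip() and not line.lstrip().startswith("#")
    )


sample = (
    '<http://example.org/a> <http://example.org/p> "x" .\n'
    "# a comment line\n"
    '<http://example.org/a> <http://example.org/q> "y" .\n'
)
print(count_triples(sample))  # → 2
```

Applied to the N-Triples download of this record, the same count should match the 178 triples listed above.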
     



