Schmidt et al: Analysis of the Accuracy of Photo-Based Plant Identification Applications

The level to which they need to be identified for sufficient understanding will likely differ depending on the goals or use of the identification information. For example, identification of Fraxinus species to genus might be acceptable in order to determine which trees are susceptible to infection from emerald ash borer (Agrilus planipennis), while identifying maples to species might be crucial to understanding a specific tree's susceptibility to storm damage, drawing distinctions between the sturdy Acer saccharum and the weak-wooded Acer saccharinum.

In terms of ecology, because each species has a specific set of preferred environmental conditions, understanding the species distribution within an area can help build a better working knowledge of the intricacies of the system being studied (Robichaud and Buell 1973; Trowbridge and Bassuk 2004). In a natural setting, the linkage between site conditions and species distribution helps to illuminate trends in hydrology and soil types across a community. By applying these ideas to urban settings, understanding the disconnect between site conditions and species selection (Trowbridge and Bassuk 2004) can be used to guide disease and pest management decisions, as well as future planting stock selections (Laćan and McBride 2008; Scharenbroch et al. 2017). A thorough knowledge of tree identification is needed to produce the plant-community inventory prior to making a site management plan or gaining an understanding of plant-community–site relationships. There is growing evidence that volunteers can produce valid data streams in generating urban community inventories, particularly at the genus level (Bancks et al. 2018), with the associated community stewardship benefits that come with citizen-science engagement (Roman et al. 2017; Crown et al. 2018).
To this end, community volunteers with varied levels of background training and, more generally, less experienced botanists and tree care professionals may use apps which offer help in identifying plants while in the field, or at home from captured field images. To use a typical app, the observer simply takes a close-up photograph of the tree (most frequently of the leaf, bark, flower, or fruit) and uploads it to the app. Once the photograph is uploaded, some apps prompt the user to specify the character being tested (again, usually the leaf, bark, flower, or fruit), and the app then compares the user's photograph to photographs within its system (Joly et al. 2014; Barré et al. 2017; Bilyk et al. 2020). The output is a listing of one or more suggestions as to what the identity of the plant may be. The first listed suggestion is viewed as the primary identification for the plant and is henceforth referred to as the "Identification." Many apps provide additional suggestions for the identity of the plant (henceforth referred to simply as "Suggestions") in order to allow for some error in the primary identification. For a thorough review of the development and logic of plant identification apps, please refer to Wäldchen and Mäder (2018).

Although these apps are often considered to be extremely helpful in species identification, little has been done to compare the identification precision and accuracy of these apps as a whole; we therefore sought to inform our conversations with students, community volunteer groups, and beginning professionals. The lack of information beyond the details and claims produced by the developers reflects the difficulty of direct comparison in a technical sense. A challenge, as detailed by Xing et al. (2020), is that the systems do not share data sets, system-training approaches, common flora, or focal plant organs, much less a comparable user interface (Cope et al. 2012; Kumar et al. 2012; Goëau et al. 2013; Wang et al. 2013; Keivani et al. 2020). Generally, apps are developed in a machine-learning environment in which function improves as additional data are accumulated, an evolving "intelligence" based on an algorithm using some form of probability-based neural network. Such derived code can be tested against open-source image sets such as Flavia (Wu et al. 2007) and the Folio data set (Munisami et al. 2015), which can then be automated into an image analysis, as was developed by Keivani et al. (2020). Additional data sets have been used elsewhere, such as the Swedish Leaf data set (Söderkvist 2001) or the LeafSnap image libraries used by Kumar et al. (2012). Generally speaking, the resulting code calibration yields results with very high accuracy, often exceeding 95% (Kumar et al. 2012; Goëau et al. 2013; Wang et al. 2013; Keivani et al. 2020). Such accuracy cannot be assumed to predict the efficacy of the tools beyond the code-training environment, but accuracy claims would certainly flow from the initial training phase. Our study uses the tools beyond this training phase, specific to our limited purpose, with non-curated field images. Our protocol to standardize images and avoid extraneous nontarget information was chosen to avoid deflation of accuracy due to photo quality.

©2022 International Society of Arboriculture
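The distinction between the top-ranked "Identification" and the longer list of "Suggestions" corresponds to what the machine-learning literature calls top-1 and top-k accuracy. As a minimal sketch of how such scores are computed, the following Python fragment evaluates hypothetical app outputs against known species (the function name, data, and suggestion lists are illustrative assumptions, not output from any app tested here):

```python
# Illustrative top-1 ("Identification") vs. top-k ("Suggestions") accuracy.
# Each record pairs a specimen's true species with the app's ranked suggestions.

def identification_accuracy(records, k=1):
    """Fraction of specimens whose true species appears among the
    first k ranked suggestions returned by the app."""
    hits = sum(1 for true_species, ranked in records if true_species in ranked[:k])
    return hits / len(records)

# Hypothetical results for three specimens (values for illustration only).
observations = [
    ("Acer saccharum",     ["Acer saccharum", "Acer nigrum"]),
    ("Acer saccharinum",   ["Acer rubrum", "Acer saccharinum"]),
    ("Fraxinus americana", ["Fraxinus pennsylvanica", "Ulmus americana"]),
]

top1 = identification_accuracy(observations, k=1)  # 1 of 3 correct
topk = identification_accuracy(observations, k=2)  # 2 of 3 correct
```

Counting a match anywhere in the first k suggestions, rather than only in the primary identification, is what allows apps to report higher accuracy when "Suggestions" are taken into account.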
January 2022