Starting an eResearch revolution with deep learning
Presentation posted on 2021-02-26, 00:06. Authored by Brent Martin and Aleksandra Pawlik.
ABSTRACT / INTRODUCTION
Manaaki Whenua Landcare Research (MWLR), like most research institutes, both consumes and generates an ever-increasing amount of data. In particular, spatial data (images, hyperspectral data, spatial samples) are central to much of what MWLR does. MWLR has a strong track record of producing spatial data for consumption by researchers, including cleaned satellite imagery and GIS layers such as the Land Cover Database (LCDB)1. MWLR also holds many nationally and internationally significant physical collections of flora and fauna. As well as the physical samples themselves, the metadata associated with these specimens is invaluable for further analysis, such as modelling species distribution2.
In recent years, machine learning (ML) has matured from a branch of computer science into a respected tool in the researcher's toolbox. Most recently, deep learning has revolutionised computer vision, unlocking new opportunities to extract knowledge from images and other spatial data. For example, whereas ten years ago both humans and computers were considered to be performing well if they identified pollen grain species from images with 65-70% accuracy, it is now straightforward to exceed 95% accuracy using deep learning3.
Organically growing deep learning
At MWLR, we have begun a journey to dramatically increase the impact of our research by consuming/reconsuming data using machine learning, with a particular focus on deep learning. This is being achieved through a collaboration between the Informatics research team and the wider scientific cohort at MWLR. This process began with a small number of projects where the potential benefit was clear. Early results of these projects have been disseminated internally through webinars, leading to further projects being identified. In this initial stage, the emphasis is on rapidly achieving results and developing broad knowledge of tools and techniques. To date we have focussed on image classification through feature extraction4, segmentation (U-Net5) and object detection (Mask R-CNN6).
Through this initial exploratory phase, we have identified two fundamental barriers to uptake. First, researchers distrust "black box" models that do not add to our understanding of "why" and cannot explain how they reach a conclusion. We are addressing this first concern by exploring how classification decisions can be visualised to show what contributed to the outcome. For example, we have shown that a deep learning model can classify beech pollen species from images with over 80% accuracy, a task considered too difficult even for specialist humans. Because researchers have been sceptical of this outcome, we have used occlusion sensitivity visualisations to demonstrate that, for correct classifications, the deep learning network focusses on expected areas of the image, such as the pollen grain's edge or texture, whereas incorrect classifications lack this focus. We are now investigating whether similar techniques are suitable for image segmentation and object detection tasks. For segmentation problems, we can also manually investigate differences between the training data and the predictions; in some cases the error may lie in inaccuracies in the training data, highlighting the potential for deep learning models to augment manual processes as a further benefit.
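The occlusion sensitivity idea above can be sketched in a few lines: slide an occluding patch across the image and record how much the model's score for the target class drops when each region is hidden; large drops mark the regions the model relies on. The sketch below is a minimal, model-agnostic version in plain NumPy; `predict_fn`, the patch size, stride and fill value are illustrative assumptions, not the settings used in our experiments.

```python
import numpy as np

def occlusion_sensitivity(image, predict_fn, target_class, patch=4, stride=4, fill=0.0):
    """Slide an occluding patch over the image and record how much the
    model's score for `target_class` drops when each region is hidden.

    Returns a heatmap; high values mark regions the model depends on."""
    h, w = image.shape[:2]
    base = predict_fn(image)[target_class]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill  # hide one region
            heat[i, j] = base - predict_fn(occluded)[target_class]
    return heat
```

The resulting heatmap can be overlaid on the input image to show whether, for a correct classification, the model attends to biologically meaningful features such as the grain's edge or surface texture.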
The second barrier to uptake is the quantity and quality of data needed. For image classification tasks, we have developed a novel method of utilising deep learning models for feature extraction that dramatically reduces both the number of training examples required and the processing requirements7; we have successfully built species identification models with good accuracy from only a few hundred images for domains such as fungal spores, coprosmas, moths and beech pollen. For segmentation tasks, we are experimenting with methods for bootstrapping imperfect training data through an iterative process of training weak models and using them to refine the training data, with additional manual correction where required8. We are applying this technique to identifying tree species from UAV orthomosaics, where the class polygons are weakly inferred from tree stem positions obtained through ground-based surveys and then refined based on the segmentation suggested by the model. It is hoped that such techniques will dramatically lower the effort required to build training sets for these tasks, increasing the value obtained from localised ground surveys by using the data to make inferences at regional or national scale. Finally, we are also exploring the impact of resolution on accuracy, to quantify the limits of scaling small-scale surveys up to national level from more readily available spatial data such as hyperspectral satellite imagery.
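To illustrate why feature extraction reduces data requirements (the actual method is described in reference 7), the sketch below uses a fixed random projection as a stand-in for a frozen pretrained network and trains a tiny nearest-centroid classifier on the extracted features. With a strong extractor, a handful of labelled examples per class can suffice. Everything here is an illustrative assumption, not our production pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pretrained" extractor: a fixed random projection. In practice
# this would be a frozen CNN (e.g. ImageNet-trained) whose penultimate-layer
# activations serve as features; only the small classifier below is trained.
W = rng.normal(size=(64, 16))

def extract_features(images):
    """Map flattened images into a compact feature space."""
    return np.maximum(images @ W, 0.0)  # ReLU-style nonlinearity

class NearestCentroid:
    """Tiny classifier trained on extracted features; it needs only a
    few examples per class because the hard work is done upstream."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

Because only the centroids are estimated from data, training is cheap and stable even with a few hundred (or fewer) images per class.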
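The iterative bootstrapping loop for imperfect training data can likewise be sketched in miniature. The toy below is a deliberately simplified 1-D stand-in (an assumption, not our UAV orthomosaic pipeline): train a weak threshold model on noisy labels, trust its predictions only where it is confident (far from the decision boundary), leave points near the boundary for manual correction, and repeat.

```python
import numpy as np

def fit_threshold(x, y):
    """Weak model: a decision threshold halfway between the class means."""
    return (x[y == 0].mean() + x[y == 1].mean()) / 2.0

def bootstrap_labels(x, labels, rounds=5, margin=0.1):
    """Iteratively train a weak model on noisy labels and use its
    confident predictions (far from the threshold) to refine the label
    set, mimicking the train/refine loop described above."""
    labels = labels.copy()
    for _ in range(rounds):
        t = fit_threshold(x, labels)
        confident = np.abs(x - t) > margin        # far from the boundary
        labels[confident] = (x[confident] > t).astype(int)
        # Points inside the margin are left for manual correction.
    return labels
```

Each round the model is retrained on the partially cleaned labels, so errors concentrated far from the decision boundary are progressively removed and manual effort is confined to genuinely ambiguous cases.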
We have so far identified 12 projects, half of which are being actively pursued. We have also organised an internal “mini-symposium” which will present two case studies, as well as discussing machine learning and deep learning techniques. A “panel” session will then discuss further potential project ideas submitted by the audience. This approach has been successful in engaging further researchers; to date six further projects have been identified ranging from counting manuka flowers in images to extracting text from historical documents, and it is anticipated that the panel discussion will generate significant further interest.
ABOUT THE AUTHORS
Brent is a machine learning specialist at Manaaki Whenua Landcare Research. His career has spanned academic research, as a senior lecturer at the University of Canterbury, as well as software engineering and R&D roles in various commercial companies. Brent's research in AI and machine learning includes developing new ML classification algorithms; applying ML to real-world problems such as electricity demand forecasting; research and development in Intelligent Tutoring Systems; and developing social network analysis techniques for criminal investigation. Brent holds a PhD in Computer Science from the University of Canterbury, New Zealand, focussing on artificial intelligence in education.
Aleksandra is an eResearch capability specialist at Manaaki Whenua Landcare Research, where she is assisting with the development of strategy, procedures and tools that promote data-driven science and research data management. She also organises and instructs workshops that help researchers develop their research data skills. Earlier in her career Aleksandra was active in the UK's Software Sustainability Institute, where she led the Institute's training activities. Outside of academia, Aleksandra has worked as a Research Community Manager for the New Zealand eScience Infrastructure (NeSI), as a researcher on NHS Lothian projects and as a freelance IT consultant in the commercial sector. She is also an instructor for the Software Carpentry Foundation. Aleksandra holds a PhD in Computing from the Open University, focussing on documentation in scientific software.
2. Bartomeus, I., Stavert, J. R., Ward, D., & Aguado, O. (2018). Historical collections as a tool for assessing the global pollination crisis. Philosophical Transactions of the Royal Society B: Biological Sciences, 374(1763).
3. Sevillano, V., Holt, K., & Aznarte, J. L. (2020). Precise automatic classification of 46 different pollen types with convolutional neural networks. PLoS ONE, 15(6): e0229751. https://doi.org/10.1371/journal.pone.0229751
4. Liang, H., Sun, X., Sun, Y., et al. (2017). Text feature extraction based on deep learning: a review. EURASIP Journal on Wireless Communications and Networking, 2017, 211. https://doi.org/10.1186/s13638-017-0993-1
5. Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv:1505.04597.
6. He, K., Gkioxari, G., Dollár, P., & Girshick, R. (2017). Mask R-CNN. Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 2961-2969.
7. Vetrova, V., Coup, S., Frank, E., & Cree, M. J. (2018). Hidden Features: Experiments with Feature Transfer for Fine-Grained Multi-Class and One-Class Image Categorization. 2018 International Conference on Image and Vision Computing New Zealand (IVCNZ), Auckland, New Zealand, pp. 1-6. doi: 10.1109/IVCNZ.2018.8634790.
8. Weinstein, B. G., Marconi, S., Bohlman, S., Zare, A., & White, E. (2019). Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks. Remote Sensing, 11, 1309.