Source: NWB SENSORS INC. submitted to NRP
INTELLIGENT MAPPING OF THE FARM USING LOW-COST, GPS-ENABLED CAMERAS DURING EXISTING FARM ACTIVITIES
Sponsoring Institution
National Institute of Food and Agriculture
Project Status
COMPLETE
Funding Source
Reporting Frequency
Annual
Accession No.
1012610
Grant No.
2017-33610-26744
Cumulative Award Amt.
$99,853.00
Proposal No.
2017-00181
Multistate No.
(N/A)
Project Start Date
Jun 1, 2017
Project End Date
Oct 31, 2018
Grant Year
2017
Program Code
[8.12]- Small and Mid-Size Farms
Recipient Organization
NWB SENSORS INC.
80555 GALLATIN RD
BOZEMAN,MT 59718
Performing Department
(N/A)
Non Technical Summary
With the advent of precision agriculture, the way we farm is changing. Technologies exist to add precision to all aspects of the farm. Through "mapping the farm," precision management strategies can be implemented, such as targeted herbicide application, targeted soil management, improved yield predictions, and improved management of water and other farm resources. These practices all require information that is not readily available to the small farm. Current field mapping technology has a financial and time commitment that puts it out of reach of many small farms. The aim of the proposed research is to enable precision agriculture on the farm by augmenting existing farm activities where the farmer drives the whole field. These activities include, but are not limited to, tilling the soil, seeding the crop, spraying for weeds or pests, and harvesting the crop. The rapid emergence of the action camera market has brought rugged, wide-field-of-view, GPS-enabled cameras into the reach of the common consumer and thus the small farm. Using these cameras as part of an automated imagery-based mapping system will bring value to the small farmer. The mapping will be achieved through a combined effort of characterizing these cameras, developing machine vision routines for object identification, and careful radiometric inversions to produce accurate color images in real-world lighting conditions. Once these variables are defined, a software platform can be developed to deliver valuable information enabling precision agriculture on the small farm.
Animal Health Component
40%
Research Effort Categories
Basic
10%
Applied
40%
Developmental
50%
Classification

Knowledge Area (KA) | Subject of Investigation (SOI) | Field of Science (FOS) | Percent
402 | 5310 | 2020 | 50%
404 | 5310 | 2020 | 50%
Goals / Objectives
The aim of this project is to develop a platform which enables precision mapping of fields on small farms using GPS-enabled action cameras combined with robust machine vision algorithms. To enable this system, the ability of these cameras to produce sufficiently accurate object recognition and color analysis needs to be proven. Proving the ability of these cameras is the driving focus of this Phase 1 project. The primary goals of the project are to:
1. Demonstrate data collection to mapping on small farms. Objectives are:
a. Deploy on combine harvesters during the grain harvest.
b. Provide maps with a fast turnaround to allow post-harvest herbicide application decisions.
2. Develop robust machine vision algorithms to detect features in farm fields. Objectives are:
a. Build algorithms that are able to handle changes in incident lighting (sunny to clouds, etc.).
b. Investigate the uses of incident lighting sensors both as a normalization input and as a direct input to the machine vision algorithm.
c. Build algorithms that are robust to scene changes due to BRDF.
3. Determine if these action cameras can be used for accurate colorimetry of crops and soils from in-field platforms such as tractors and combines. Objectives are:
a. Determine if BRDF parameterization and inversion can be implemented using images from action cameras taken over multiple observation geometries.
b. Use incident lighting sensors to provide the required information to derive crop and soil color indices from action cameras.
4. Build a data set of images collected during real-world on-farm activities from real-world on-farm platforms. Objectives are:
a. Analyze the data set of grain harvest images from Montana State University to add classifications of weather and lighting conditions.
b. Collect imagery over a wider set of farming activities including spraying, tilling, harvest, and other activities encompassing a variety of crops and fallow ground in a variety of different lighting conditions.
c. Classify the collected imagery to use as training and testing data sets for algorithm development.
Project Methods
This Phase 1 work is broken down into three main categories: hardware and integration (~10%), data collection (~35%), and algorithm development and data analysis (~55%). To understand these action cameras, a full characterization will be conducted. This analysis will include radiometric, spectral, and noise measurements of the cameras and the variability across a set of 5 cameras with non-sequential serial numbers. Once the cameras are understood, they will be deployed during real-world farming activities through cooperation with local farmers. This data collection will encompass a variety of crops, lighting conditions, stages of crop maturity, and farm vehicles. We will take care to include both ideal conditions (clean windows and good lighting) and non-ideal conditions (dirty windows, dust, poor or changing lighting) in this dataset. Manual and assisted classifications will be performed on this dataset to build a set of training and development data for the classification algorithms. Algorithm development and data analysis will focus on building algorithms that can accommodate the non-ideal conditions in the data sets. This effort will encompass both pure machine vision and assisted machine vision (BRDF correction, radiometric calibration, etc.) approaches. Evaluation of the project will focus on the ability of these imaging systems to provide accurate image classification. Standard accuracy analysis methods from the machine vision community will be used to determine detection and classification accuracy. A secondary criterion will be the ability to provide these maps in a user-friendly format in time for farm management decisions to be made.
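For reference, the standard accuracy analysis described above usually amounts to a confusion matrix plus per-class precision, recall, and F1 scores. The following is a minimal sketch of that evaluation, assuming scikit-learn is available; the class names and label lists are hypothetical placeholders, not project data.

    # Illustrative accuracy analysis for an image classifier (hypothetical labels).
    # Class names are examples only, not the project's actual classes.
    from sklearn.metrics import confusion_matrix, classification_report

    classes = ["crop", "weed", "stubble", "bare_ground"]                    # hypothetical classes
    y_true = ["crop", "weed", "weed", "stubble", "bare_ground", "crop"]     # human labels
    y_pred = ["crop", "weed", "crop", "stubble", "bare_ground", "crop"]     # classifier output

    # Rows are the true class, columns the predicted class.
    print(confusion_matrix(y_true, y_pred, labels=classes))

    # Per-class precision, recall, and F1, plus overall accuracy.
    print(classification_report(y_true, y_pred, labels=classes))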

Progress 06/01/17 to 10/31/18

Outputs
Target Audience: The target audience we reached were the growers and researchers we collaborated with during the data collection effort of this project. Growers were limited to those nearby in Montana and included 10 growers; half of these growers operated small to mid-sized farms. We were graciously allowed to expand our data collection efforts by locating our equipment on farming activities in the research fields operated by Montana State University at the Post Farm and Fort Ellis research stations.

Changes/Problems: Initial problems during 2017 led to NWB Sensors, Inc. requesting an extension of the project period through the 2018 growing season. The details of these problems and their solutions are explained below. During the summer 2017 growing season we cooperated with growers throughout Montana to collect imagery during harvest, plowing, and spraying of their fields. However, initial funding delays limited our ability to work closely with the growers during critical data collection periods. While we waited for SBIR funds to be released we purchased equipment and distributed it to cooperating farmers, but due to our limited funds on hand we could not spend detailed time with the cooperators. This led to operator and camera mounting issues that affected data quality. Less than 50% of farmer-collected data was returned to NWB Sensors, and of the returned data nearly 30% was not usable for its intended purposes without additional pre-processing and data quality correction. This was addressed by a second deployment during 2018 that utilized a camera controller so that image collection was consistent between growers. There were also initial problems developing our machine vision algorithms. The original development path produced an algorithm that took 60 seconds per image on a computer without a high-end GPU. Data were collected every 4 seconds in the field to ensure overlap between images. The long time to process the images made the process impractical, and this fully custom software program was abandoned. This led us to the alternative solution of using a combination of proprietary software to detect regions requiring classification in the image and then using Google's open source and commercially friendly Inception v3 for object classification. Inception v3 is highly optimized, and transfer learning using images collected in the fields allowed rapid and accurate object classification. Using this optimized processing chain has reduced processing time to 1 to 2 seconds per image on a standard computer (a brief sketch of this transfer-learning step follows at the end of this section).

What opportunities for training and professional development has the project provided? This project allowed us to hire a student from Montana State University as an intern, who greatly benefited from the work conducted under the project. This student has now graduated and taken a research job with a large corporation.

How have the results been disseminated to communities of interest? The results of the research enabled by this SBIR have been disseminated in detail to a farmer we worked closely with, Terry Nugent. He has assisted in weed identification and validation of our resulting maps. Furthermore, due to matching funds we received from the state of Montana to expand upon the SBIR research, we have submitted quarterly reports at their request to the Montana State Matching Funds Program. We have also had many conversations about the high-level concepts of the technology with farmers.

What do you plan to do during the next reporting period to accomplish the goals? Nothing Reported
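For context, the object-classification stage of this processing chain can be outlined with TensorFlow's Keras API. The sketch below shows transfer learning on Inception v3 under the assumption of a directory of labeled image crops; the directory name, image size, and training settings are placeholders rather than the project's actual configuration.

    # Minimal transfer-learning sketch with Inception v3 (TensorFlow / Keras).
    # Paths, image size, epoch count, and class layout are assumptions, not project settings.
    import tensorflow as tf

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "classified_crops/train", image_size=(299, 299), batch_size=32)
    num_classes = len(train_ds.class_names)

    # Load Inception v3 pretrained on ImageNet and freeze its weights.
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet", input_shape=(299, 299, 3))
    base.trainable = False

    inputs = tf.keras.Input(shape=(299, 299, 3))
    x = tf.keras.applications.inception_v3.preprocess_input(inputs)
    x = base(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, epochs=5)   # only the new classification head is trained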

Impacts
What was accomplished under these goals?

1: Demonstrate data collection to mapping on small farms.
Objective a: Deploy on combine harvesters during the grain harvest.
Achievement: Through the support of the USDA SBIR Phase I program, NWB Sensors has developed and deployed a prototype system with collaborating growers during the 2017 and 2018 growing seasons on farms across Montana. The hardware of the system is comprised of a small, durable, GPS-enabled camera and an embedded computer to control camera acquisition, both mounted inside the cab of the vehicle. The software platform processes the imagery collected by the cameras into maps of detected objects and is based on a combination of custom proprietary software and open source image classification software.
Objective b: Provide maps with a fast turnaround to allow post-harvest herbicide application decisions.
Achievement: Our processing system has advanced to the level of being able to provide maps of the fields to the growers. Within this system we are able to identify patches of weeds and are able to distinguish between different types or species of weeds with high accuracy.

2: Develop robust machine vision algorithms to detect features in farm fields.
Objective a: Build algorithms that are able to handle changes in incident lighting (sunny to clouds, etc.).
Achievement: Built a machine vision platform that is a combination of custom and open source software programs capable of processing field imagery in a variety of lighting conditions. The first step of the object classification process is to identify regions of interest within the image. These regions of interest are identified by two parallel and complementary processes: image difference detection and segmented region classification. These processes are combined into a classification mask that identifies regions of the image that need to be passed to the machine vision software for further classification (a brief sketch of this masking step follows this goal).
Objective b: Investigate the uses of incident lighting sensors both as a normalization input and as a direct input to the machine vision algorithm.
Achievement: Initial work had considered locating a sensor on the harvester; however, this was determined to be impractical due to the high dust in this environment, so the sensor was instead located nearby in the field. Using data collected by this sensor and integrating it with our lighting correction systems, it was determined that this sensor was not required to provide accurate lighting corrections and/or object detections. This is beneficial, as it greatly simplifies the hardware requirements for the future work of this project.
Objective c: Build algorithms that are robust to scene changes due to BRDF.
Achievement: The image classification algorithm has shown robustness against lighting changes. This has been achieved by building a training data set that contains a wide variety of illuminations and perceived colors. The machine vision software is based on TensorFlow and utilizes transfer learning with Inception v3, a convolutional neural network created by Google that has won numerous image recognition competitions. Incorporating a variety of lighting and color conditions for objects of the same class in the training data has proven to provide a system that can detect and classify objects even during changing lighting conditions.
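The region-of-interest step under Goal 2, Objective a can be illustrated with a minimal OpenCV sketch. The color-range segmentation below is a simple stand-in for the proprietary segmented-region classifier, and the file names and threshold values are illustrative assumptions only.

    # Sketch: combine frame differencing and a simple segmentation into an ROI mask.
    # The HSV color range stands in for the proprietary segmentation; thresholds are illustrative.
    import cv2

    prev = cv2.imread("frame_0001.jpg")   # consecutive frames from the cab camera (placeholder names)
    curr = cv2.imread("frame_0002.jpg")

    # Mask 1: image-difference detection between consecutive frames.
    diff = cv2.absdiff(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY))
    _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # Mask 2: crude color segmentation (stand-in for the segmented-region classifier).
    hsv = cv2.cvtColor(curr, cv2.COLOR_BGR2HSV)
    green_mask = cv2.inRange(hsv, (35, 40, 40), (90, 255, 255))

    # Combine the two masks and extract candidate regions for the classifier.
    roi_mask = cv2.bitwise_or(motion_mask, green_mask)
    contours, _ = cv2.findContours(roi_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    regions = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
    print(f"{len(regions)} candidate regions to pass to the classifier")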
3: Determine if these action cameras can be used for accurate colorimetry of crops and soils from in-field platforms such as tractors and combines.
Objective a: Determine if BRDF parameterization and inversion can be implemented using images from action cameras taken over multiple observation geometries.
Achievement: Through the support of this USDA SBIR program, NWB Sensors developed methods that use a time history of in-field observations to determine an estimated BRDF for the crop, and implemented color correction methods that use the crop's estimated BRDF, the camera's view angle, and the scene illumination angle to perform scene color correction across multiple images. This work has been implemented using data collected from a combine harvester. Given that the combine sees the crop from multiple angles over the course of the day (all while the sun is moving across the sky), the various viewing and illumination angles (azimuth and elevation) allow the Rahman-Pinty-Verstraete parameterization of the BRDF to be inverted. This was implemented via a genetic algorithm (a brief sketch of the inversion follows this goal). The azimuth and elevation of the sun (the illumination source) are determined from the latitude, longitude, and GPS time from the camera. The azimuth and elevation at which the camera views the crop are determined by the bearing of the camera (from GPS) and the tilt angle of the camera (either determined experimentally, measured, or taken from the tip/tilt sensor of the camera). Once an illumination model is built, images that do not fall on the BRDF model line can be shifted to match, thus removing inconsistent illumination conditions, or all the images can be transformed to match a single illumination condition.
Objective b: Use incident lighting sensors to provide the required information to derive crop and soil color indices from action cameras.
Achievement: Deployed incident lighting sensors in the field near the combine harvester. Incorporated information from these lighting sensors into the BRDF inversions to supplement and constrain the data inversions. It was determined that consistent color representation in the imagery could be achieved without the need for an incident lighting sensor.
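A compact sketch of the inversion referenced above is shown here. It uses the common three-parameter Rahman-Pinty-Verstraete form and SciPy's differential evolution optimizer as a stand-in for the project's genetic algorithm; the observation arrays, angle conventions, and parameter bounds are hypothetical illustrations.

    # Sketch: fit the 3-parameter RPV BRDF model (rho0, k, Theta) to multi-angle
    # reflectance samples with an evolutionary optimizer. Observation values are
    # hypothetical placeholders; the project used its own genetic algorithm.
    import numpy as np
    from scipy.optimize import differential_evolution

    def rpv(params, sza, vza, raa):
        """Rahman-Pinty-Verstraete reflectance; angles in radians."""
        rho0, k, theta_hg = params
        cos_g = (np.cos(sza) * np.cos(vza)
                 + np.sin(sza) * np.sin(vza) * np.cos(raa))
        # Minnaert-like term, Henyey-Greenstein phase function, and hot-spot factor.
        m = (np.cos(sza) * np.cos(vza)) ** (k - 1) / (np.cos(sza) + np.cos(vza)) ** (1 - k)
        f = (1 - theta_hg**2) / (1 + 2 * theta_hg * cos_g + theta_hg**2) ** 1.5
        big_g = np.sqrt(np.tan(sza)**2 + np.tan(vza)**2
                        - 2 * np.tan(sza) * np.tan(vza) * np.cos(raa))
        hot = 1 + (1 - rho0) / (1 + big_g)
        return rho0 * m * f * hot

    # Hypothetical per-image geometry: sun zenith, view zenith, relative azimuth, reflectance.
    sza = np.radians([30, 35, 40, 45, 50])
    vza = np.radians([55, 55, 55, 55, 55])
    raa = np.radians([10, 60, 120, 170, 90])
    observed = np.array([0.21, 0.18, 0.16, 0.15, 0.17])

    def cost(params):
        return np.sum((rpv(params, sza, vza, raa) - observed) ** 2)

    # Evolutionary search over plausible parameter bounds (a stand-in for the GA).
    result = differential_evolution(cost, bounds=[(0.01, 1.0), (0.1, 2.0), (-1.0, 1.0)])
    print("fitted rho0, k, Theta:", result.x)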
4: Build a data set of images collected during real-world on-farm activities from real-world on-farm platforms.
Objective a: Analyze the data set of grain harvest images from Montana State University to add classifications of weather and lighting conditions.
Achievement: All images collected during this project and licensed from Montana State University were processed to produce a consistent set of required information, and these data were added to the metadata tags of every image, including both EXIF (camera and GPS) and XMP (sun, weather, and farming information). For the MSU data, GPS locations were taken from the GPS data logger or transferred from images collected by other co-located cameras. When not present in the GPS data, vehicle heading was calculated using a time history of GPS positions. The sun position was calculated using the GPS-derived vehicle location and time. All these data, along with crop, farming activity, weather conditions, region, grower information, and licensing information, were added to the imagery.
Objective b: Collect imagery over a wider set of farming activities including spraying, tilling, harvest, and other activities encompassing a variety of crops and fallow ground in a variety of different lighting conditions.
Achievement: Imagery was collected during the 2017 and 2018 growing seasons that includes additional crops (oats, corn, lentils, garbanzo beans, mustard, and others) and additional activities (desiccation, spraying, tilling, swathing, planting, and others). This project has enabled NWB Sensors to expand the image data set by another 660,000 images and over 220 hours of continuous video at 4K resolution. Thus, the total dataset, now including data licensed from Montana State University, includes over 1 million images and 220 hours of video taken by a variety of cameras in a variety of weather and lighting conditions.
Objective c: Classify the collected imagery to use as training and testing data sets for algorithm development.
Achievement: Select images in our imagery were hand classified by human observers, with objects of interest identified within each image; 3,300 in-scene objects and 68,000 single-object sub-images have been classified. These data are used to train the object detection and object classification routines.
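As an illustration of Objective c, the sketch below splits a folder of classified single-object images into training and testing sets. The directory layout (one folder per class) and the 80/20 split are assumptions, not the project's actual values.

    # Sketch: split classified single-object images into training and testing sets.
    # Directory layout (one folder per class) and the 80/20 split are assumptions.
    import glob
    import os
    from sklearn.model_selection import train_test_split

    image_paths = glob.glob("classified_crops/**/*.jpg", recursive=True)
    labels = [os.path.basename(os.path.dirname(p)) for p in image_paths]   # folder name = class

    train_paths, test_paths, train_labels, test_labels = train_test_split(
        image_paths, labels, test_size=0.2, stratify=labels, random_state=42)

    print(f"{len(train_paths)} training images, {len(test_paths)} testing images")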

Publications


Progress 06/01/17 to 05/31/18

Outputs
Target Audience: NWB Sensors worked with grain growers and agriculture researchers in Montana during the period covered by this report. During the 2017 harvest (cereal grains and pulse) and fall planting (winter wheat), NWB Sensors collaborated with growers and researchers to collect imagery. Cameras were placed on farmers' equipment as the fields were worked. This imagery is currently being used to train our machine vision platform.

Changes/Problems: There have not been any major changes; however, there have been minor changes that occurred during data collection. During the 2017 data collection, errors with camera operation led to data not being collected, or to data quality issues. Therefore, NWB Sensors is using the wireless control of the camera and an embedded Linux computer to automate control of the cameras in future data collection. We had intended to implement an in-house machine vision solution for object classification. During the implementation of this solution we learned that the overhead of this platform was making it too slow to be usable on a non-high-end computer platform. In January 2018, NWB Sensors approached Google to better understand their machine vision applications. They directed us toward their open source TensorFlow platform and provided assistance in understanding their existing solutions, one of which has been implemented at the object classification layer in our system.

What opportunities for training and professional development has the project provided? This research allowed NWB Sensors to hire an engineering intern from Montana State University. To accomplish his assigned tasks he was trained by NWB staff on the Python programming language, machine vision, data classification, and XML data structures. When asked, he also reports that this experience has helped him improve his time management skills and his confidence working with the public through his interactions with local growers during field data collection.

How have the results been disseminated to communities of interest? Nothing Reported

What do you plan to do during the next reporting period to accomplish the goals? During the next reporting period we will: utilize camera control software operating on a single-board computer to make data collection consistent between operators and conditions (a brief sketch of such a capture loop follows below); demonstrate quick-turnaround processing allowing growers to have actionable maps and information about their fields; target data collection and full processing on fields where multiple years of data are available; and demonstrate use of the co-located incident light sensor and determine if the added hardware costs are justified.
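The planned camera-control automation can be outlined as a simple timed capture loop running on the single-board computer. In the sketch below, trigger_capture() is a hypothetical placeholder for the camera's wireless control command, and the 4-second interval matches the image-overlap cadence noted elsewhere in this report.

    # Sketch: timed image capture from an embedded Linux controller.
    # trigger_capture() is a hypothetical stand-in for the camera's wireless API;
    # the 4-second interval matches the image-overlap requirement noted in this report.
    import time
    from datetime import datetime, timezone

    CAPTURE_INTERVAL_S = 4

    def trigger_capture():
        """Placeholder: send a capture command over the camera's wireless interface."""
        print("capture requested at", datetime.now(timezone.utc).isoformat())

    def run(duration_s=60):
        end = time.monotonic() + duration_s
        next_shot = time.monotonic()
        while time.monotonic() < end:
            trigger_capture()
            next_shot += CAPTURE_INTERVAL_S
            # Sleep until the next scheduled capture to keep a steady cadence.
            time.sleep(max(0.0, next_shot - time.monotonic()))

    if __name__ == "__main__":
        run()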

Impacts
What was accomplished under these goals? The following accomplishments have been achieved under each of the goals:

Goal 1: Demonstrate data collection to mapping on small farms. Data collection occurred during the 2017 summer spraying of fallow crops, fall grain harvest, and winter wheat planting. Delays in algorithm development have delayed quick turnaround of maps for the grower; NWB Sensors will demonstrate this during the 2018 growing season.

Goal 2: Develop robust machine vision algorithms to detect features in farm fields. Algorithms have been developed to identify regions of crop/non-crop in the imagery, and by tracking crop regions in time they have allowed us to identify when changes in lighting occur. Color correction algorithms have been developed for the imagery to allow for consistent color during changing lighting conditions. Investigations using nearby incident lighting sensors have suggested that a total-intensity incident lighting sensor could improve the performance of the color corrections and the BRDF inversion (described under Goal 3). Therefore, further studies will be implemented in the 2018 summer.

Goal 3: Determine if these action cameras can be used for accurate colorimetry of crops and soils from in-field platforms such as tractors and combines. a. A genetic algorithm has been developed that performs a BRDF inversion for crop scenes. This uses sun position, knowledge of the crop, and a time history of images (multiple sun angles) to derive the BRDF of the crop. This allows us to take any camera view and sun angle and calculate a color transfer matrix to any other camera and sun angle. Work is ongoing to apply this technique to scenes of soil during plowing. b. Work with the incident lighting sensor is ongoing and will be demonstrated during the fall 2018 harvest.

Goal 4: Build a data set of images collected during real-world on-farm activities from real-world on-farm platforms. The imagery licensed from Montana State University has been processed to add vehicle heading and sun position to each image. Sun position was added using the GPS coordinates and time. Heading was provided by some cameras, but when not available it was calculated using a time history of GPS locations (a brief sketch of this calculation follows at the end of this section). This allowed for characterization of the view and illumination angle for each image. The same data were added to every image collected during the 2017 experiments and prior. Classification of cloud cover, smoke, and atmospheric turbidity in each image is ongoing. During 2017, data collection included spraying, tilling, harvest, and fixed baseline imagery of the crops during different stages of maturity. Crops in the current image sets include: spring wheat, winter wheat, barley, lentils, corn, garbanzo beans, faba beans, dry peas, oats, summer fallow, harvested stubble, and bare ground. Classification is a continuously ongoing process. Currently the data set consists of 2,500+ classified in-scene objects and 41,000 classified sub-scenes. These classified objects and scenes are being used to train our object detection and object classification routines. The set of classified sub-scenes is being grown both by direct human input and by object identification by our machine vision platform with human validation. This process has been used to greatly improve our weed object classes and was instrumental in building our wild oats, foxtail, and tree classes. Recently this process created its first new class, identifying prickly lettuce. The machine vision platform identified these weeds as objects of type "weed" with high confidence but with low confidence on the type of weed. Human intervention then observed the images and classified them, leading to a new weed detection class.
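The heading and sun-position tagging described under Goal 4 can be sketched as follows. The bearing between consecutive GPS fixes uses the standard great-circle formula, and solar azimuth and altitude come from the pysolar package (an assumed dependency); the coordinates and timestamp below are placeholders.

    # Sketch: vehicle heading from consecutive GPS fixes and sun position from
    # position plus GPS time. Coordinates and timestamp are placeholders; assumes
    # the pysolar package is installed.
    import math
    from datetime import datetime, timezone
    from pysolar.solar import get_altitude, get_azimuth

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial great-circle bearing from point 1 to point 2, in degrees from north."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        x = math.sin(dlon) * math.cos(p2)
        y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
        return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

    # Two consecutive GPS fixes from the image stream (placeholder values near Bozeman, MT).
    lat1, lon1 = 45.6770, -111.0429
    lat2, lon2 = 45.6772, -111.0425
    when = datetime(2018, 8, 15, 18, 30, tzinfo=timezone.utc)   # GPS time of the second fix

    heading = bearing_deg(lat1, lon1, lat2, lon2)
    sun_azimuth = get_azimuth(lat2, lon2, when)
    sun_altitude = get_altitude(lat2, lon2, when)
    print(f"vehicle heading {heading:.1f} deg, sun azimuth {sun_azimuth:.1f} deg, "
          f"sun altitude {sun_altitude:.1f} deg")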

Publications