A Realtime Camera Fusion 3D Model with On-the-Go Calibration for Tracking Smoke Plumes

    With the increasing severity and frequency of wildfires in places such as Greece and California, the imaging and modeling of wildfires have never been more crucial. Although many existing methods use infrared monitoring or pre-calibrated cameras to triangulate the spread of a wildfire, most are impractical in impoverished regions due to their high cost and centralized nature. This project therefore proposes a community-based approach to detecting wildfires by tracking smoke plumes: in essence, any camera, from a VAPIX-enabled mounted camera to a smartphone, can contribute to a fused estimate of where the smoke plumes are.


    However, for any camera to predict the location of a smoke plume miles away, it needs to know parameters such as pitch, distortion, and zoom. To fine-tune these parameters, optimization algorithms like Levenberg-Marquardt can correct for errors using known geolocations, star identifications, and feature matching; this calibration is a computationally and analytically complex task that previously prevented phones from being used in place of mounted cameras.
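To make the calibration step concrete, the following is a minimal, self-contained sketch (not the project's actual code) of a Levenberg-Marquardt refinement. It assumes a simplified pinhole model in which landmarks of known elevation angle (e.g. from geolocation or star identification) have been matched to pixel rows, and it fits just two parameters, pitch and focal length; all names and the model itself are illustrative.

```python
import numpy as np

def project(params, elev):
    """Predicted pixel row for a landmark at elevation angle `elev` (radians)."""
    pitch, focal = params
    return focal * np.tan(elev - pitch)

def residuals(params, elev, observed_rows):
    return project(params, elev) - observed_rows

def levenberg_marquardt(elev, observed_rows, params, lam=1e-3, iters=50):
    """Fit (pitch, focal) by damped Gauss-Newton steps (Levenberg-Marquardt)."""
    for _ in range(iters):
        r = residuals(params, elev, observed_rows)
        # Numerical Jacobian; with only 2 parameters, finite differences suffice.
        J = np.empty((len(elev), 2))
        eps = 1e-6
        for j in range(2):
            p = params.copy()
            p[j] += eps
            J[:, j] = (residuals(p, elev, observed_rows) - r) / eps
        # LM step: (J^T J + lam * diag(J^T J)) delta = -J^T r
        A = J.T @ J
        delta = np.linalg.solve(A + lam * np.diag(np.diag(A)), -J.T @ r)
        new = params + delta
        if np.sum(residuals(new, elev, observed_rows) ** 2) < np.sum(r ** 2):
            params, lam = new, lam * 0.5   # step improved the fit: accept, damp less
        else:
            lam *= 2.0                      # step made it worse: reject, damp more
    return params

# Synthetic check: recover known pitch/focal from noiseless observations.
true_pitch, true_focal = 0.05, 1200.0
elev = np.linspace(0.1, 0.4, 8)
rows = true_focal * np.tan(elev - true_pitch)
est = levenberg_marquardt(elev, rows, np.array([0.0, 1000.0]))
```

In practice a library routine such as SciPy's `scipy.optimize.least_squares(..., method='lm')` would replace the hand-rolled loop, and the parameter vector would grow to include distortion and zoom.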


    After calibration, feature-matching algorithms such as ORB and pre-trained segmentation models such as SAM can separate and match different sections of a smoke plume across cameras, producing a triangulation with minimal error. Finally, to monitor the spread of the wildfire, manual or automatic annotations can be used in conjunction with optical flow.
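Once two calibrated cameras agree on the same plume feature, the triangulation step reduces to geometry. The sketch below (illustrative only; all positions are made up) estimates the plume location as the least-squares point closest to the bearing rays from each camera:

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares point nearest to a set of rays (origin + t * direction)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projects onto the plane normal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two cameras 2 km apart, both sighting a plume at (1000, 3000, 500) meters.
plume = np.array([1000.0, 3000.0, 500.0])
cams = [np.array([0.0, 0.0, 10.0]), np.array([2000.0, 0.0, 15.0])]
dirs = [plume - c for c in cams]
est = triangulate(cams, dirs)
```

With noisy real-world bearings the rays will not intersect exactly, and the same least-squares formulation returns the point minimizing total perpendicular distance, which is the "minimal error" triangulation described above.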


    The system will mainly be written in Python, JavaScript, and HTML, and the accuracy of the model will be evaluated by comparing historical images of smoke plumes against recorded fire perimeters.
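One plausible way to score that comparison, sketched below under the assumption that both the predicted plume footprint and the historical fire perimeter are rasterized onto a common grid, is intersection-over-union (the masks here are synthetic placeholders):

```python
import numpy as np

def iou(pred, truth):
    """Intersection-over-union of two boolean masks on the same grid."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 0.0

# Synthetic 100x100 grid: predicted footprint vs. recorded perimeter.
pred = np.zeros((100, 100), dtype=bool)
pred[20:60, 20:60] = True
truth = np.zeros((100, 100), dtype=bool)
truth[30:70, 30:70] = True
score = iou(pred, truth)
```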
