Field test: “pixel to GPS”
In the context of our lab's deep learning research, an interesting problem came up: given a drone's geographic coordinates (“GPS” for convenience) and height, its heading, the 3D orientation of its camera, and the camera's intrinsic parameters (resolution, field of view…), can we compute the GPS coordinates of a given pixel in the camera's image? This is a photogrammetry problem, and solving it would let us pinpoint the location of objects detected by a deep learning algorithm in image space (i.e. at a certain pixel in the image).
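To make the problem concrete, here is a minimal sketch of one way to do this computation (not necessarily our in-lab solution): cast a ray through the pixel using a pinhole camera model, rotate it by the camera tilt and the drone heading, intersect it with a flat ground plane at the takeoff elevation, and convert the resulting metric offset to latitude/longitude with a small-offset approximation. All names and conventions below are assumptions: zero camera roll, tilt measured down from horizontal, heading clockwise from true north.

```python
import math

M_PER_DEG_LAT = 111_320.0  # rough meters per degree of latitude (approximation)

def pixel_to_gps(lat, lon, height_m, heading_deg, tilt_deg,
                 u, v, img_w, img_h, hfov_deg):
    """Project image pixel (u, v) onto flat ground; return (lat, lon).

    Assumptions: pinhole camera, zero roll, flat terrain at takeoff
    elevation, tilt in degrees down from horizontal, heading in degrees
    clockwise from true north.
    """
    # Focal length in pixels, derived from the horizontal field of view
    f = (img_w / 2) / math.tan(math.radians(hfov_deg) / 2)

    # Ray through the pixel in the camera frame: x right, y down, z forward
    cx, cy, cz = u - img_w / 2, v - img_h / 2, f

    # Pitch the ray down by the camera tilt (rotation about the camera x-axis)
    t = math.radians(tilt_deg)
    fwd = cz * math.cos(t) - cy * math.sin(t)   # level (forward) component
    dwn = cz * math.sin(t) + cy * math.cos(t)   # downward component
    if dwn <= 0:
        raise ValueError("ray points at or above the horizon; no ground hit")

    # Rotate by the heading into north/east components
    h = math.radians(heading_deg)
    north = fwd * math.cos(h) - cx * math.sin(h)
    east = fwd * math.sin(h) + cx * math.cos(h)

    # Scale the ray so it descends exactly height_m to the ground plane
    s = height_m / dwn
    north_m, east_m = s * north, s * east

    # Small-offset conversion from meters to degrees (equirectangular)
    dlat = north_m / M_PER_DEG_LAT
    dlon = east_m / (M_PER_DEG_LAT * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon
```

As a sanity check, the image-center pixel with a 45° tilt from a height of 10 m should land exactly 10 m ahead of the drone along its heading.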
After some work, we developed a solution in-lab, but we needed to test it with real-world data. For this data-collection task, we used one of our Phantom 4 drones and devised a scenario inside the Okutama baseball field. We laid out some markers in the field and measured their GPS positions to serve as ground truth. We then flew the drone at different altitudes and took pictures from different angles and camera tilts.
Example telemetry for one of the pictures:
Latitude: 35.80359161 deg
Longitude: 139.09391617 deg
Height ATOE: 29.1 m
Heading: -172.5 deg
Camera tilt: 23.0 deg
Together with the pictures, we saved the drone's telemetry, including GPS position, height, heading, and camera tilt. One thing we had to keep in mind was to save the height above takeoff elevation (ATOE), not the altitude read from the GPS, because the latter is far too inaccurate for our application: its nominal accuracy is 15 meters, and we have measured errors of over 30 meters in the past. Fortunately, we can obtain the reading from the drone's altimeter, which is more accurate and is already the value we want: the height above the ground from which the drone took off.
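The per-photo record we log can be sketched as a small data structure like the one below; the field names are illustrative (they are not the drone SDK's), and the values are the example telemetry shown earlier.

```python
from dataclasses import dataclass

@dataclass
class Telemetry:
    """Per-photo drone telemetry. Field names are illustrative,
    not taken from any drone SDK."""
    latitude: float      # deg
    longitude: float     # deg
    height_atoe: float   # m, above takeoff elevation (altimeter, not GPS)
    heading: float       # deg, clockwise from north
    camera_tilt: float   # deg, down from horizontal

# The example record from above
shot = Telemetry(35.80359161, 139.09391617, 29.1, -172.5, 23.0)
```

Keeping the altimeter-based `height_atoe` as its own field (rather than overloading a generic "altitude") makes it harder to accidentally feed the inaccurate GPS altitude into the projection.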
We now have data to test our pixel-to-GPS solution, and we will post results soon.