When you look at a plant, it probably just looks green. When we look at it, however, we can see a lot more than that. We can see how much it’s photosynthesizing, whether it has any diseases, perhaps even whether it has a nutrient deficiency. This is possible through the use of sensors that measure light from outside of the visible spectrum, picking up signals that you can’t see unaided. In short, when it comes to plants, there’s more than meets the eye.

One important wavelength band for plant phenotyping is the near-infrared (NIR) band. NIR is needed to calculate the all-important Normalized Difference Vegetation Index (NDVI). NDVI is a remarkably accurate predictor of where dense vegetation is located, and is often used when analyzing satellite or drone data. Fundamentally, NDVI is the normalized difference in intensity between near-infrared and red light. This works because vegetation strongly absorbs red light and strongly reflects NIR. At a small scale (such as on an in-field robot), NDVI can be a much more accurate way to identify vegetation than simply looking for green pixels.
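As a sketch, NDVI can be computed per-pixel from aligned red and NIR images. The array values and the 0.3 vegetation threshold below are illustrative, not taken from any particular dataset:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Per-pixel NDVI in [-1, 1] from aligned NIR and red arrays.
    eps avoids division by zero on dark pixels."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Toy example: vegetation reflects NIR strongly and absorbs red,
# so plant pixels land near +1, while bare ground sits near 0.
nir = np.array([[200.0, 60.0]])   # bright in NIR = plant; dim = soil
red = np.array([[20.0, 50.0]])    # plants absorb most red light
print(ndvi(nir, red))             # plant pixel ~0.82, soil pixel ~0.09

veg_mask = ndvi(nir, red) > 0.3   # rough threshold for "is vegetation"
```

The same two-channel arithmetic works whether the inputs come from a satellite product or a pair of cameras on a robot, as long as the red and NIR images are registered to each other.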

Normal cameras, however, can’t measure NIR light directly, which makes calculating NDVI difficult. Many people instead use specialized multispectral cameras for this. These can produce very high-quality data (and can do a lot more than just calculating NDVI), but they don’t come cheap. Is there something in between?

IR Imaging for Cheap

A while ago, I learned that Raspberry Pi sells a version of their camera without the IR filter. Standard cameras use IR cutoff filters because the CMOS sensor is typically somewhat sensitive to IR; without one, IR shows up in the red channel, making the resulting images look odd. So what is this filter-less variant actually for? As it turns out, if you put a blue gel in front of the camera to block red and green light, you end up with only IR data in the red channel. (Most gels are conveniently transparent to IR wavelengths, because they're designed to sit in front of hot studio lights without melting.) This is useful, because it gives us an extremely cheap way to construct a decent IR camera.
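Once the blue gel is in place, recovering the NIR image is just channel indexing: the red channel of whatever the camera returns is (approximately) pure NIR. A minimal sketch, with a synthetic RGB array standing in for a real capture from the Pi camera stack:

```python
import numpy as np

def extract_nir(rgb):
    """With a blue gel over a no-IR-filter camera, the red channel is
    (approximately) pure NIR; return it as a single-channel image."""
    return rgb[..., 0]

# Synthetic 2x2 "frame": plants reflect NIR strongly, so plant pixels
# are bright in the red channel even though visible red is blocked.
frame = np.array([[[230, 40, 90], [220, 35, 85]],    # plant pixels
                  [[30, 20, 60], [25, 18, 55]]],     # background
                 dtype=np.uint8)
nir = extract_nir(frame)
print(nir)  # top row (plant) bright, bottom row (background) dark
```

In practice the frame would come from the camera library rather than a hand-written array, but the channel-slicing step is the same.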

The setup for capturing IR data with the Raspberry Pi camera. NIR is captured to the red channel, and visible red light is blocked by the blue gel.

The results of such photography look very strange if you view them as standard RGB images. In particular, any plants show up as a bright, reddish color instead of green. This is because the plants are reflecting a tremendous amount of IR light, which gets saved to the red channel in the final image.

A picture of some lettuce plants, taken with the IR camera and viewed as a normal RGB image.

MARS Camera 2.0

The second version of the MARS camera module.

A while ago, I did a revision of the MARS camera module, upgrading the internals to a Raspberry Pi 5. Besides providing more computational power to work with, a primary advantage of the Pi 5 is that it has two CSI camera connectors instead of one. This makes it possible to have two separate cameras in the same physical camera module. One use case for this is dedicating an auxiliary camera in each module to IR, which is the design I eventually landed on.

My initial design, however, suffered from a minor issue where the two cameras were so close together that the large zoom lens on the RGB camera was blocking the view of the IR camera. Oops!

 

After a quick redesign and reprint, I got to the point where the cameras were no longer interfering with each other, and I set about collecting some actual data with them. For this, I borrowed some of the lettuce plants from Donald's controlled environment agriculture experiment. Initially, I just lined them up on a long table and slowly drove the robot over them. I found, however, that my table was, for some reason, extremely reflective in IR, so I eventually had to cover it with black cloth.

Of course, adding an auxiliary camera to each of the three second-generation camera modules adds a fair amount of extra data to deal with. I found that I was able to save the data from all 9 cameras to a rosbag without any issues. However, the storage requirements were non-trivial: 5 GB for just a few minutes of video! We might have to buy a bigger SSD.
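A rough back-of-the-envelope shows why the bag grows so fast. The resolution and frame rate below are hypothetical placeholders, not the actual MARS camera settings:

```python
# Hypothetical settings -- not the real MARS configuration.
cameras = 9
width, height = 1920, 1080      # pixels per frame
bytes_per_pixel = 3             # raw 8-bit RGB
fps = 15

bytes_per_sec = cameras * width * height * bytes_per_pixel * fps
gb_per_min = bytes_per_sec * 60 / 1e9
print(f"{gb_per_min:.0f} GB/min uncompressed")  # ~50 GB/min

# Seeing only ~5 GB for a few minutes of recording implies the
# image topics are being compressed well below the raw data rate.
```

Even generous compression only buys so much headroom, which is why the SSD fills up quickly with all nine cameras running.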

Future Work

Moving forward, I’ll be working on collecting some actual data in the field with my IR cameras. I am currently working on a separate project that could benefit from a large IR dataset, and collecting it myself is now an option. Also, it turns out that calibrating 9 cameras concurrently is super fun. I’ll probably write an entire post on that, but the bottom line is that I had to change my calibration approach significantly, and it could still potentially be improved.

Ultimately, we’re planning to hand off MARS to a 3rd party this summer so that they can perform data collection on their own. This is… ambitious, as anyone who has made even a cursory examination of this blog well knows. There will doubtlessly be many improvements made to reliability and user experience between now and then, and realistically, it is likely that some of those will intersect with the cameras. I’m optimistic that adding all these new features did not also introduce a bunch of new problems. The historical evidence, however, is not on my side.
