The MARS camera modules package a Raspberry Pi HQ camera module along with a Raspberry Pi 4B in a single enclosure. They run their own custom software and are designed to be used with the MARS robot.

MARS camera module

Using the Camera Modules

The camera modules support Power over Ethernet (PoE), so the only cable you need to connect to each camera is an Ethernet cable. The Ethernet switch on MARS provides the power for the cameras. Connect Ethernet cables between the cameras and the switch, and between the switch and the Jetson. If everything is connected correctly, the switch and cameras should power on automatically as soon as you turn on the robot.

MARS cameras connected to the Ethernet switch

Troubleshooting

  • Check that the lights next to the Ethernet ports on the Pi cameras are on. If they are not on, the camera is either not powered, or has not booted up correctly.
  • Check that the Ethernet switch is powered. It should receive power through a barrel jack connected to the 19V supply from the MARS battery module.
  • Check that the Jetson is running. Because of the way ROS works, if the Jetson is power-cycled without also power-cycling the camera modules, the camera modules will refuse to reconnect to ROS when the Jetson boots up again. Fix this by power-cycling the camera modules. (This should happen automatically when you reboot the Jetson.)

Processing Camera Data

The cameras record compressed videos directly to the rosbag saved on the Jetson. To extract the videos, you will first need to obtain the rosbag. It is possible to do the extraction process on the Jetson itself, but you will probably see better performance by doing it on a different computer. Assuming you have the rosbag, you can extract the videos using the “extract videos” tool included with the camera software:

  1. Download the camera software from Github.
  2. Make sure you have Docker and docker-compose installed on your computer.
  3. Run the following command:
    ROS_UID=${UID} BAG_FILE="mars_sensors.bag" docker-compose -f docker-compose-video-extract.yml up

    Replace “mars_sensors.bag” with the path to your bag file. Note that the Docker setup works by mounting your current directory inside the container, so you can only use relative paths here.

    Additional options can be passed to the script using the EXTRA_ARGS variable, for instance, EXTRA_ARGS="-o my_video -d h264_nvv4l2dec". See here for all available options.

  4. This should produce one video file for each camera in your current directory. It will also produce corresponding .txt files. These latter files contain the timestamp for each frame in the video, and allow you to synchronize the videos if you want to. Additionally, this script is configured to extract GPS data from the bag files as CSV files. (This feature can be customized using the “-t” option.)
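The per-frame timestamp files make it straightforward to line video frames up with other data streams. As a rough sketch (the file layout here is an assumption: one per-frame timestamp per line in the .txt file, and timestamped rows in the GPS CSV), each frame can be matched to the nearest GPS fix with a binary search:

```python
import bisect

def nearest_fix(frame_ts, gps_times):
    """Index of the GPS fix whose timestamp is closest to frame_ts.
    Assumes gps_times is sorted in ascending order."""
    i = bisect.bisect_left(gps_times, frame_ts)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(gps_times)]
    return min(candidates, key=lambda j: abs(gps_times[j] - frame_ts))

# Hypothetical per-frame timestamps (seconds), as read from a *_ts.txt file
frame_times = [0.000, 0.033, 0.066, 0.100]
# Hypothetical GPS fix timestamps, as read from a *_fix.csv file
gps_times = [0.00, 0.05, 0.10]

# Map each frame to its nearest GPS fix
matches = [nearest_fix(t, gps_times) for t in frame_times]
```

The same idea synchronizes two videos: pick the frame in the second video whose timestamp is nearest to each frame in the first.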

Extracting Plot Videos

It’s possible to go further than merely extracting the raw video data from the cameras. If you are scanning a field arranged into plots, you can automatically extract videos that correspond to particular plots.

First, you will need a map of the field. This is generally derived from a UAV orthophoto. It is important that the georeferencing on the orthophoto be as accurate as possible! The use of either RTK GPS or GCPs is required.

Use QGIS (or similar software) to manually define the boundaries of the plots. You might find the rectangles, ovals, and diamonds command in QGIS helpful for this purpose. Save the vector layer containing the plot boundaries as an ESRI shapefile.

Example peanut field map with plot boundaries labeled.


The script for extracting plots is currently located in this repository. It is called plot_video_extraction.py and takes the following inputs:

  • The raw video file
  • The associated timestamp file
  • The associated GPS CSV files for the front and back GPS
  • The plot boundary shapefile.

This might seem daunting, but you will note that the vast majority of these inputs are actually produced by the video extraction script. Here is an example invocation:

python /home/daniel/git/peanuts/plot_video_extraction.py -v peanut13_cam2.mp4 -o plot_videos -t peanut13_cam2_ts.txt -f peanut13_gps2_fix.csv -b peanut13_gps1_fix.csv -p ../plot_shapes/plot_boundaries.shp --camera-offset -1.0

Run the script with the “-h” flag to see documentation for what all the flags mean.
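Under the hood, assigning frames to plots amounts to testing whether each frame's GPS position falls inside a plot boundary polygon. A minimal sketch of that idea, using a standard ray-casting point-in-polygon test on hypothetical planar plot coordinates (the actual script reads the boundaries from the shapefile, and its implementation may differ):

```python
def point_in_plot(x, y, boundary):
    """Ray-casting test: is (x, y) strictly inside the polygon `boundary`?"""
    inside = False
    n = len(boundary)
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        # Does the edge cross the horizontal line through y?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical plot boundaries in a local planar frame (e.g. metres)
plots = {
    "plot_1": [(0, 0), (10, 0), (10, 5), (0, 5)],
    "plot_2": [(0, 6), (10, 6), (10, 11), (0, 11)],
}

def plot_for_position(x, y):
    """Name of the plot containing (x, y), or None if outside all plots."""
    for name, boundary in plots.items():
        if point_in_plot(x, y, boundary):
            return name
    return None
```

A frame's video is then appended to the output clip for whichever plot contains the camera's position at that frame's timestamp.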

Troubleshooting

  • Make sure that your front and back GPS files are actually assigned correctly. The “GPS 1” and “GPS 2” nomenclature is derived from the serial numbers on the Emlid GPS receivers, not from their relative positions on the robot, and the two of them can and do get swapped between runs.
  • The “--camera-offset” parameter controls how far in front of the front GPS the center of the camera FOV is located. Depending on the positioning of your cameras, you might have to adjust this. If you notice that your results appear to be shifted (for instance, the plot videos start in the middle of a plot and end in the middle of the next one), that would indicate that this value needs to be changed.
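The offset correction itself is simple geometry: the two antennas define the robot's heading, and the camera's ground position is projected that many metres beyond the front antenna. A sketch of the idea, assuming coordinates have already been converted to a local metric frame such as UTM (the script's actual implementation may differ):

```python
import math

def camera_position(front, back, offset):
    """Project the camera's ground position `offset` metres ahead of the
    front GPS antenna, along the back-to-front antenna direction.
    `front` and `back` are (x, y) tuples in a local metric frame."""
    dx = front[0] - back[0]
    dy = front[1] - back[1]
    length = math.hypot(dx, dy)
    if length == 0:
        return front  # antennas coincide; heading is undefined
    return (front[0] + offset * dx / length,
            front[1] + offset * dy / length)
```

With this convention a negative offset places the camera behind the front antenna, which matches the --camera-offset -1.0 in the example invocation above.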

Imaging the Cameras

Imaging the camera SD card is a good idea before you make changes to the software so that you have a working configuration to restore to if things go wrong. To image the cameras, you will have to disassemble the camera module and remove the SD card.

The camera module can be disassembled by removing the four plastic thumb screws on the outside, and pulling apart the three layers. The Raspberry Pi can then be unscrewed from the middle layer to access the SD card. Be sure not to damage the ribbon cable between the Pi and camera during this process.

Once the SD card has been inserted into your computer, you can make an image of it using the following command (on Linux):

sudo dd if=/dev/sda bs=4M | pv | pigz > mars_camera_2023-04-14.img.gz

Replace /dev/sda with whatever device the SD card shows up as. You may have to install the “pv” and “pigz” tools for this to work.

Restoring an Image

To flash a saved image back to the SD card, you can use a similar command:

pigz -dc mars_camera_2023-04-20.img.gz | pv | sudo dd of=/dev/sda bs=4M

Once again, replace /dev/sda with whatever device the SD card shows up as. Make sure you have this correct!

Camera Configuration

There are two settings you might need to change after imaging the cameras. The first is the camera name in /etc/camera_module/config.yaml. This controls what name will be used for the ROS topics that the camera publishes. The second is the camera IP address, which is set in /etc/dhcpcd.conf. This IP should be on the same subnet as the Jetson and unique to each camera.
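For reference, a static address in /etc/dhcpcd.conf looks like the following (the interface name and addresses here are hypothetical examples; use an address on the Jetson's subnet, unique to each camera):

```
interface eth0
static ip_address=192.168.1.101/24
```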

Rebuilding the Camera Code

This is definitely an “advanced” technique. Before you do this, I suggest that you image the camera so that you can restore it later if necessary.

SSH into the Raspberry Pi, and then run:

cd camera_catkin_ws

catkin_make -j2

The “-j2” part is intended for Pis that have limited RAM. If you have more RAM, you can try increasing the job count to use all the CPU cores for a faster build.

Updating the ROS Installation

You should almost never have to do this. With that out of the way, here’s how:

Because ROS does not officially support the Raspberry Pi, it is installed from source. There is a folder in the home directory called “ros_catkin_ws”, where ROS is built. To rebuild and install it, run this command from inside that folder:

sudo src/catkin/bin/catkin_make_isolated -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/melodic -j2 --install

This is adapted from this guide. Once again, you can remove the “-j2” part if you have enough RAM.
