RE: Build Flow - Docker CircleCI ROS ARM7

The last post briefly summarized the build flow for our system. That build flow remains the same, though we did switch from TravisCI to CircleCI. In this post, I'll offer more detail on the intricacies of the system and the "gotchas" that forced me to implement things the way I did. Admittedly, ours is a unique setup, and I couldn't find all of the solutions I needed in one place. Hopefully, this serves as a top Google result for a very specific case.

To summarize the steps:

  1. The base Docker image has its own GitHub repo and auto-builds on DockerHub on every git push
  2. A git push to the project repo triggers CircleCI to build the code
  3. CircleCI pushes a new image with the built code to another DockerHub repo

 

The Base Image - Gotcha #1

We wanted to make use of Docker's automated builds (AB) for our base image because having it ever ready and up-to-date is nice. Unfortunately, Docker's AB machines support neither ARM kernels nor kernel module access (which means no binfmt_misc). Some kind internet people created a workaround that intercepts all shell calls and runs them in an architecture emulator (QEMU).

To implement this solution, first compile the altered QEMU source on your ARM device, and rename it qemu-arm-static. Then make a file called cross-build-start with the contents:

#!/usr/bin/qemu-arm-static /bin/sh
mv /bin/sh /bin/sh.real
cp /usr/bin/sh-shim /bin/sh

And cross-build-end with the contents:

#!/usr/bin/qemu-arm-static /bin/sh.real
mv /bin/sh.real /bin/sh

And sh-shim with the contents:

#!/usr/bin/qemu-arm-static /bin/sh.real
set -o errexit
cp /bin/sh.real /bin/sh
/bin/sh "$@"
cp /usr/bin/sh-shim /bin/sh

Add all these files to the repo with your Dockerfile. Finally, you can ignore most of the bits in my Dockerfile, except for these important lines:

ENV QEMU_EXECVE 1
COPY dockerutils/* /usr/bin/
RUN [ "cross-build-start" ]
... <your commands here> ...
RUN [ "cross-build-end" ]
ENTRYPOINT [ "/usr/bin/qemu-arm-static", "/usr/bin/env", "QEMU_EXECVE=1" ]
CMD [ "/bin/bash" ]

Note that the dockerutils directory contains cross-build-start/end, sh-shim, and qemu-arm-static.

The Base Image - Gotcha #2

After step one, you might have a working automated DockerHub image. We, however, ran into another issue-- DockerHub could not establish a connection to the official ubuntu-ports repo server for several ARMHF packages. To resolve this, change the image's apt sources.list to point toward http://ftp.tu-chemnitz.de/pub/linux/ubuntu-ports/ or an alternative mirror (though there aren't many). See sources.sh for a handy script to call from your Dockerfile.

Circle CI 

Vs. Travis CI

We quickly made our transition from Travis CI to Circle CI because Travis continued to stall on catkin builds (even short ones). Initially, we suspected our install step was too long for Travis, which is why we moved that step to DockerHub's automated builds. Additionally, Circle:

  • Offers 4 simultaneous builds for open source projects
  • Allows up to 32 simultaneous processes per build (lower across simultaneous builds)
  • Separates console output per command for easy debugging
  • Does not automatically pull git submodules

Tips, Tricks

To add support for git submodules, add this section to your circle.yml:

checkout:
  post:
    - git submodule sync
    - git submodule update --init

To obscure your circle.yml environment variables, add them to an empty file on your local computer in the form `VAR=value`. Then, encrypt the file with a KEY of your choosing, using the command `openssl aes-256-cbc -e -in secret-env-plain -out secret-env-cipher -k $KEY`. Set KEY in your Circle CI project settings. 

 

Docker Travis Odroid (ARMHF) Buildflow

DevOps is never that interesting a topic. The best outcome of a great solution is that it works and end users are none the wiser. Given that, I'll be brief with our setup and point you toward all the pieces as examples.

The idea behind our on-board system build flow is based in docker and follows these steps:

  1. The Dockerfile for the base image that goes on the Odroid XU4 lives in a separate repo from the main codebase. It auto-builds on DockerHub on push, installing all project dependencies (but not project code).
  2. A new commit to the Seanmatthews/rowboat1 repo (main codebase) triggers a Travis CI build.
  3. Travis CI pulls the base image from rowboat/rowboat-base-images:$VERSION, installs and builds the project code, then pushes a new image to rowboat/rowboat-tested-build:$TRAVIS_BRANCH$TRAVIS_BUILD_NUMBER.
  4. In order to get the latest project software, an onboard device would pull the latest image from rowboat/rowboat-tested-build.

NEW ROS GoPro Hero Package

To date, an official ROS package for the GoPro Hero does not exist, and I plan for this package (https://github.com/Seanmatthews/gopro_hero) to remedy that. Currently tested only with the GoPro Hero 4 Black, the package's nodes allow:
* Access to camera settings
* Taking and retrieving still images (including multishot)
* Continuous live stream frame retrieval

We plan to attach the GoPro Hero 4 Black camera to the sub's external housing. A WiFi dongle (separate from the sub's router), plugged into the sub's vision computer, connects to the GoPro's network. That computer accesses the GoPro's stream for logging and processing. The network configuration here seems (and is) somewhat complex, but the GoPro insists on acting as the access point-- devices must join its network for control and streaming. Briefly, we considered bridging the primary sub network to the GoPro's network to avoid an additional network and network device, but the GoPro's network is fairly unstable and we don't want the sub relying on it in any capacity.

The GoPro Hero 4's live stream comes from a UDP stream (udp://10.5.5.9:8554). Any connection must command the GoPro to restart the stream after it connects, and then send keep-alive messages every 2 to 2.5 seconds. A similar effect may be achieved by sending the stream restart command at that same interval, but this method produces a video blip on each restart.
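
To make the keep-alive requirement concrete, here is a minimal sketch (not part of the package) of a loop that sends a periodic UDP keep-alive to the camera after you've issued the stream restart command. The destination port and payload string are assumptions taken from the keep-alive gist linked at the end of this post-- verify them against your firmware.

// keepalive.cpp -- minimal GoPro keep-alive loop (sketch, not the package code).
// The payload and port are assumptions based on the linked keep-alive gist;
// check them against your camera's firmware.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>

int main()
{
    const std::string payload = "_GPHD_:0:0:2:0.000000\n"; // assumed keep-alive message

    sockaddr_in gopro{};
    gopro.sin_family = AF_INET;
    gopro.sin_port = htons(8554);                    // assumed keep-alive port
    inet_pton(AF_INET, "10.5.5.9", &gopro.sin_addr); // the GoPro's address on its own network

    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) return 1;

    // Send the keep-alive every 2 seconds, inside the 2-2.5s window the camera expects.
    while (true) {
        sendto(sock, payload.data(), payload.size(), 0,
               reinterpret_cast<sockaddr*>(&gopro), sizeof(gopro));
        sleep(2);
    }
}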

The package is written as a collection of C++ nodes because 1) I'm more comfortable with the language, and 2) I initially investigated using ffmpeg to grab still frame images from the GoPro's UDP stream (though that proved troublesome), with the expectation that it could be tuned for minimal latency. While I still plan to explore this avenue, it turns out that OpenCV's VideoCapture class is perfectly capable of capturing frames from the live stream with only a small amount of latency. 
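
For illustration, here's a minimal sketch of that VideoCapture approach, assuming OpenCV was built with FFmpeg support and the stream has already been restarted and kept alive as described above:

// grab_frames.cpp -- pull frames from the GoPro live stream with OpenCV (sketch).
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // The GoPro's MPEG-TS stream; requires an OpenCV build with FFmpeg support.
    cv::VideoCapture cap("udp://10.5.5.9:8554");
    if (!cap.isOpened()) {
        std::cerr << "Failed to open stream" << std::endl;
        return 1;
    }

    cv::Mat frame;
    while (cap.read(frame)) {
        // Hand the frame off to logging/processing; here we just display it.
        cv::imshow("gopro", frame);
        if (cv::waitKey(1) == 27) break; // ESC to quit
    }
    return 0;
}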

During development, I drew on a number of sources that paved the way for this node. Some even implemented their own GoPro control programs and APIs. Should you opt to write your own GoPro control program, here they are:
* https://github.com/Seanmatthews/goprowifihack
* https://www.reddit.com/r/gopro/comments/2md8hm/how_to_livestream_from_a_gopro_hero4/
* https://gist.github.com/3v1n0/38bcd4f7f0cb3c279bad#file-hero4-udp-keep-alive-send-py
* http://ffmpeg.org/

Hardware design & fabrication

While not much forward progress has been made in the past month, we remain committed to our goal of a spring/summer maiden voyage. Part of the delay is due to hardware fabrication and purchasing. Purchasing at the hobbyist level can be difficult for certain specialty items because the few companies that have a product don't have the sales resources to deal with small fish clients. These companies typically rely on large orders and aren't in the business of web stores and one-offs. Hobbyist fabrication is undoubtedly more prevalent nowadays, but still remains expensive on a small scale for specialized parts. The lesson is to make friends with skilled machinists.

Despite delays, we're executing on a carefully crafted plan for hardware fabrication. Let's take a look at that plan in bullet point format:

  1. Fabricate the acrylic viewing dome. This dome will be attached to the front of the sub and will hold the Point Grey computer vision camera used for visual navigation. We looked at two different manufacturers for this-- eMachineShop and CalPlastics. CalPlastics is the lower-cost option, despite a $400 minimum order on custom fabrications. They don't guarantee a uniform thickness throughout the dome, since they use a blowing process to form it, but for that price we were able to purchase three units. They also can't guarantee exact screw hole placement because they hand drill the holes. We compromised on a machined ring that they'd place over the dome's flange to guide the hand-drilled hole positions. In hindsight, we should have purchased the dome before the acrylic hull. CalPlastics sells domes for $30 each that fit a different tube diameter, and acrylic tubes are available in an abundance of sizes.
  2. Buy the cast acrylic hull tube. We already did this through US Plastics, who sells cast acrylic tubing in five foot minimum lengths, which they'll cut for $1 per cut. We chose cast acrylic, despite the price increase over extruded, because cast acrylic is typically clearer and lacks the striations present in extruded materials. Also keep in mind that the edge pieces of the original five foot piece are not flat! We didn't know this, and ended up with two unusable one foot pieces (until we can cut them). Further headache came from US Plastic's refusal to refund 20% of the cost for a one foot section that came deeply scratched. They eventually gave in, but not until after several heated emails. As I mentioned above, this step should have come second, due to the wide availability of differently-sized acrylic tubes. What can I say-- hindsight is 20/20.
  3. Buy the external connectors. After a month-plus wait, Seacon sent us quotes for a number of their underwater-mateable All-Wet connectors. They're pricey, to the tune of $1200 for connectors for six thrusters, two battery packs, six miscellaneous (lights, ballast tanks, etc.), and a number of dummy connectors. Once we have these, we'll know how many holes the aluminum endcap should have, and at what sizes.
  4. Fabricate the endcap and flange adaptors. The flange adaptor forms a seal to the inside of the acrylic hull and provides a surface onto which the endcap (and, on the other end, the dome) can screw. All of these parts are designed, barring the number and size of holes in the flat aluminum endcap. Given the possibility that the screw holes in the acrylic dome are not placed properly, we have not yet started the fabrication process on these parts. Add to that the fact that eMachineShop may take up to 35 days to manufacture them.
  5. Finish the aluminum parts. eMachineShop charges an arm and a leg for parts finishing. The most we can afford is a brushed finish. After that, we plan to use a local shop for hard anodizing. Together, these processes should smooth the pieces enough to form a solid seal with the o-rings.
  6. Pot the batteries. We'll place the batteries, two each, into anodized aluminum tubes filled with thermally-conductive potting compound. This will protect the batteries from water and other damage, as well as provide a uniform metal exterior that attaches easily to the sub's external frame.
Endcap sans connector holes

Flange adaptor

Viewing dome

The [Un]Redeeming Qualities of LiFePO4 Batteries

LiFePO4 batteries are relatively new to the scene, introduced commercially at the turn of the century. They have a set of properties which are desirable for the sub. But as with anything, they have disadvantages as well. 

Two sets of four batteries to operate the sub for over an hour

  • LiFePO4 batteries have a high energy density, but not as high as LiPO.
  • LiFePO4 batteries don't heat up and catch on fire when they're dinged.
  • LiFePO4 batteries have trouble self-balancing during charging. Once one cell in a series is full, charge will not flow to the cells beyond it. They need a custom charge management board.
  • LiFePO4 batteries can have a high discharge rate, making them ideal for the small thruster bursts needed to stabilize the sub in ocean currents.
  • LiFePO4 batteries cost more than LiPO and other batteries with similar energy density.
  • LiFePO4 batteries have a long cycle life. As long as you're charging these babies correctly, you're going to see a very long lifetime out of them-- up to 2000 cycles.
  • LiFePO4 batteries do not leak much charge. Charge them and leave them (set it and forget it?).
  • Poor performance in low temperatures. Below 0 degrees Celsius, battery performance degrades significantly. Fortunately for us, the ocean does not reach that temperature at depth.

All in all, we like these qualities. Ocean temperatures reach ~4 degrees Celsius, so no problem there. We initially did not account for the batteries' inability to self-balance, but all that's needed to fix that is an additional charging board. The high discharge rate and energy density are ideal for controlling the Blue Robotics T200 thrusters (350 Watts max). They put a dent in my pocket, but hopefully they'll last a good long while.

Interfacing a Pololu Maestro

For interfacing the ESCs which control the thrusters, we bought two devices from Pololu-- a Micro Maestro (6-channel), and a Mini Maestro (12-channel). You can find most pertinent information about these devices on Pololu's website. Pololu does provide source code for their sample programs, but 1) it's written in C#, and 2) it's not ROSified. I wrote a C++ ROS version, and here are the takeaways:

Mini Maestros (devices with 12 channels or more) have a special command to control all channels simultaneously with one message. Using this message, as opposed to the single target position message, saves milliseconds per message. That doesn't sound like much, but with control rates up to 333Hz, it adds up to noticeable latency. Use the Mini Maestro with command COMMAND_SET_ALL_TARGETS.
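
As a sketch of what that looks like on the wire, here's the Set Multiple Targets frame from Pololu's compact serial protocol (byte values per their user's guide-- double-check against your Maestro's documentation). The /dev/ttyACM0 device path is an assumption; use whatever your Maestro's command port enumerates as.

// maestro_set_all.cpp -- send one Set Multiple Targets frame to a Mini Maestro (sketch).
#include <fcntl.h>
#include <unistd.h>
#include <cstdint>
#include <vector>

// Build a Set Multiple Targets frame (compact protocol command 0x9F) for contiguous
// channels starting at firstChannel. Targets are in quarter-microseconds
// (e.g. 6000 == 1500us).
std::vector<uint8_t> setAllTargetsFrame(uint8_t firstChannel,
                                        const std::vector<uint16_t>& targets)
{
    std::vector<uint8_t> frame{0x9F,
                               static_cast<uint8_t>(targets.size()),
                               firstChannel};
    for (uint16_t t : targets) {
        frame.push_back(t & 0x7F);        // low 7 bits
        frame.push_back((t >> 7) & 0x7F); // high 7 bits
    }
    return frame;
}

int main()
{
    int fd = open("/dev/ttyACM0", O_WRONLY); // assumed path of the Maestro's command port
    if (fd < 0) return 1;

    // Six thrusters on channels 0-5, all at neutral (1500us == 6000 quarter-us).
    std::vector<uint16_t> targets(6, 6000);
    std::vector<uint8_t> frame = setAllTargetsFrame(0, targets);
    write(fd, frame.data(), frame.size());

    close(fd);
    return 0;
}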

Only expose as much control to the user as is necessary. The Blue Robotics T200 thrusters do not report velocity, so the number that controls the thrusters' thrust becomes somewhat arbitrary. For our application, we define a speed control interface, per thruster, of a value between -100 (backward) and 100 (forward), inclusive. The values represent a nice, clean percentage of thrust. Additionally, they abstract away device-specific position values, leaving the path open for other PWM hub devices.
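
A sketch of that mapping follows, with illustrative pulse-width endpoints (the 1500us neutral and +/-400us swing are assumptions, not the T200 ESC's documented values):

// thrust_to_target.cpp -- map a -100..100 thrust percentage to a Maestro target (sketch).
// The 1500us neutral and +/-400us range are illustrative; pull the real endpoints
// from your ESC's documentation.
#include <algorithm>
#include <cstdint>
#include <iostream>

uint16_t thrustPercentToTarget(double percent)
{
    percent = std::max(-100.0, std::min(100.0, percent)); // clamp to [-100, 100]
    const double neutralUs = 1500.0; // assumed neutral pulse width
    const double rangeUs   = 400.0;  // assumed +/- swing to full thrust
    const double pulseUs   = neutralUs + (percent / 100.0) * rangeUs;
    return static_cast<uint16_t>(pulseUs * 4.0); // Maestro targets are quarter-us
}

int main()
{
    std::cout << thrustPercentToTarget(-100) << " "
              << thrustPercentToTarget(0) << " "
              << thrustPercentToTarget(100) << "\n"; // expect 4400 6000 7600
    return 0;
}

Keeping the public interface in percentages means that swapping the Maestro for another PWM hub only changes this one conversion.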

There will exist a bottleneck. Some link in the chain will prove to be the slowest, but make that predictable if you can. It's reasonable to assume that the Maestro interface software is going to be the bottleneck. The Afro ESC 30A accepts inputs at 1kHz. The Pololu Maestro accepts inputs at 333Hz. The ARM processor on the Odroid could undoubtedly handle 333Hz, but the physical update rate of the thruster velocities will also depend somewhat upon the physical properties of the thruster motor. The software-- receiving, buffering, and processing sensor data, then sending thruster commands-- will, I suspect, operate around 50Hz.

Send a HOME command to the device once it's powered on with the PWM devices connected. The Maestro needs this to calibrate its current positions against the connected devices. Otherwise, you'll not be able to control those devices.

 

Hackathon Results

The four of us met this past Sunday to achieve two key goals:

  1. Operate the thrusters from end to end. That is, using an XBox controller attached to an operator laptop with GUI display, spin the thrusters by communicating with the Odroid SBC on the same LAN. The SBC controls the PWM board, which controls the ESC, which controls the thrusters.
  2. Choose the remaining internal components-- including connectors, router, (larger) DC converter-- and fit them into the 6" x 12" cylindrical hull.

We achieved both quite successfully.


Choosing O-Rings & The Challenger Disaster

On this momentous day in 1986, the Space Shuttle Challenger broke apart 73 seconds after launch, leading to the deaths of all seven crew members. A faulty o-ring, known about beforehand, caused a breach, allowing pressurized burning gas to reach the outside, which caused the overall structure to fail. On this day in 2016, I'm choosing the o-rings which will seal our endcaps against the water outside.

While there are many materials for specific o-ring purposes, two stand out as multipurpose and extremely common-- BUNA-N nitrile and silicone. BUNA-N is very cheap and very available. However, it has less resistance to water abrasion and petroleum products than silicone. Given that we're operating in the water surrounding NYC, it's safe to say we're going with the silicone.

We'll place two o-rings on the flange adapter, which will seal against the inside of the hull. A third o-ring rests in the groove on the face of the flange adaptor, sealing against the endcap when it's screwed down. 

This is all I know about o-rings. I write this post with the hope that some o-ring expert, sleuthing the internet for o-ring related blog posts, will happen upon this one and tell me that my logic and decisions regarding o-rings are flawed and that disaster is imminent. Please, if you're out there, take me to o-ring school. Let this not serve as a "last words" before Rowboat1 careens to the bottom of the Diamond Reef.

Brief status update

Pumping out blog posts is tough when you're always coding, testing, and ordering supplies, so this one will be brief. We're reaching a point in software where we'll have full teleop control of the robot using an XBox 360 controller, but the sub requires three key electrical & hardware components before we submerge it: a power system, an expanded hull, and fancy connectors.

For the power system we're using four 3.2V LiFePO4 cells in series to supply the hull with the desired ~12V (12.8V nominal). We'll embed two batteries each into two aluminum tubes and seal them with thermally-conductive potting compound. The combination should provide adequate heat sinking into ocean waters. We'll supply unregulated 12V to the thrusters and regulated 5V to all the electronics. Two fuse boxes will separately protect the electronics and thrusters. We plan to use Seacon connectors for all connections into and out of the hull.

Look at these batteries. Look at them!

For the expanded hull, we bought a 6" outer diameter, 12" long, 1/4" thick acrylic hull, as well as a 1/8" thick polycarbonate tube. The polycarbonate tube was admittedly an impulse buy, and it's becoming clear that acrylic has better clarity and better pressure resistance. Polycarbonate has better impact resistance, but I don't plan on getting shot at underwater. Other Sean adjusted the CAD files for the endcaps and flange connector pieces to comply with both new tube sizes. We plan to send those to a local shop to cut the general shapes and tap the screw holes. Later, after we figure out the power and thruster connection situation, we'll drill those holes in the endcap ourselves. He also altered the part that cradles the hull on the acrylic baseboard so that it supports a 6" diameter hull. We'll laser cut this part out of acrylic. All that's left to resize is the electronics tray that goes inside the hull.

As I mentioned, we're using Seacon connectors. Fortunately, they have a connector which supports six different three-prong connectors in a semi-circle-- almost definitely designed for our thrusters. In addition to the thrusters, we'll need ports for power in, the control tether, lights (2), ballast control (2), and however many more for the miscellany that will undoubtedly pop up. Beware, these connectors can get pricey.

In other big news, the team is getting together for a mini-hackathon during the last weekend of January and first week of February. We're on track for a maiden voyage in the Spring!

Stream to PC from GoPro Hero4 Black

One of our recent design improvements is the addition of a GoPro Hero4 Black. Admittedly, I bought it without much research because the thing looks friggin awesome. I mean, look at that beaut...

Pros:

  • Small, fits in the palm of your hand
  • 4k @ ~30fps, with higher fps at lower resolutions
  • Modes out the wazoo, including low-light
  • Waterproof to ~140ft

Cons:

  • < 1hr battery life
  • Lack of public API
  • Pricey
  • No wired streaming
  • Noticeable latency

When it comes to streaming, GoPro provides only one option-- mobile devices. They've made it very clear that the primary use case, in their eyes, remains on-device recording and playback. However, they left open a REST interface for all commands and settings, almost undoubtedly intentionally. Find all of those commands here.

Finally, the steps I followed to stream the Hero4 to my Mac. The specific commands will vary between OSes, but the general instructions are the same.

  1. Install ffmpeg & ffplay. On OS X, that's brew install ffmpeg --with-ffplay
  2. Install the GoPro app on your mobile device, use it to set up wireless, and stream video to that device. That last part is important, as I believe it initializes something on the camera.
  3. Connect to your GoPro's wireless network.
  4. Back out of all menus on the GoPro device. I've noticed the device may not be able to stream when it's in a menu.
  5. Open your computer's firewall to port 8554 for UDP traffic.
  6. In a terminal, ffplay udp://:8554
  7. In a web browser, go to http://10.5.5.9/gp/gpControl/execute?p1=gpStream&c1=restart. You should see the response {"stream" : "0"}.

When you flip over to the ffplay window, you should see streaming video from your GoPro with very high latency (~6s). To remedy this to some extent, stop ffplay and repeat steps six and seven, but replace the step-six command with ffplay -fflags nobuffer -f:v mpegts -probesize 8192 udp://:8554. This should bring the latency down to less than 2s.

Note that when the Hero4 Black first came out, GoPro released it with different firmware, whose commands were different. If the above steps don't work for you, check your firmware version against the latest.

Bonus: Underwater!

Our use case for this camera was going to travel one of two paths. Either we'd simply attach it to the outside of the sub and press record before letting it loose, or we'd also stream video back to one of the computers for real-time image processing. Since the GoPro Hero4 Black acts as a wireless access point, we thought we could potentially connect to it, through ~1ft of water, from inside the sub. Turns out it works in the bathtub (i.e. not salt water) to just under a foot!

References:
  1. https://www.reddit.com/r/gopro/comments/2md8hm/how_to_livestream_from_a_gopro_hero4/cr1b193
  2. https://www.reddit.com/r/gopro/comments/2md8hm/how_to_livestream_from_a_gopro_hero4/coh94hi

Choosing a Computer Vision Camera

Know thy requirements

The most important part of picking a camera for a computer vision application is to know your requirements. That sounds like common sense, but people don't typically feel the gravity of that decision until it's forced upon them during the choosing process. Additionally, it helps to be able to explain to other people what you need. Then, when you know what you need, you still must weigh dependent requirements against each other (cost vs. performance, for example). For us, the requirements look something like this, listed in order of importance:

  • Low-light sensitivity
  • Accessible API
  • Cost
  • Frames per second
  • USB3 interface
  • Other size/weight/power/operating temperature/field of view requirements

Low-light camera considerations

From here, I took my primary requirement and learned a bit (a lot) about how digital cameras work these days. To be perfectly honest, I haven't ever been into photography, and I know little about how any of it works, technically or stylistically, apart from a handful of fun facts. After a day of studying, I learned about a subset of digital camera properties that can affect its ability to operate in a low-light environment, namely the semi-deep sea. This shabby-looking graph will give you an idea of which wavelengths of the visible light spectrum penetrate deepest:

Graph source: http://www.seafriends.org.nz/phgraph/water.htm

Interestingly, you can see from this graph why deeper ocean water looks blue, while coastal waters occasionally look green-- blue penetrates deepest in most of the ocean, but certain microbes/particulates present in shallow coastal waters allow green to penetrate deepest.

CCD vs CMOS

CCD and CMOS are the two most widely adopted technologies for the digital camera sensors which absorb light's photons for conversion into an electrical charge later interpreted by your camera. Without delving too deep into the differences, know that CCD:

  • Used to be king because CMOS required more, and more complicated, electronics
  • Dedicates more of its surface area to photon absorption (potentially better light sensitivity)
  • Every pixel's charge exits through a limited number of output nodes

and CMOS:

  • Has a smaller footprint
  • Allows for more frames per second
  • Creates less noise
  • Costs less
  • Every pixel transmits its charge in parallel

I've undoubtedly conveyed a piece of information improperly here, but that's the gist of it. And at first glance, you'd think "better light sensitivity, problem solved." But wait, there's more...

Pixel Size

Pixel size is just that-- the size of each pixel in µm. It has a non-linear effect on the photon absorption ability of the sensor.

Quantum efficiency

Quantum efficiency is somewhat related to pixel size, in that it represents the percentage of photons falling onto the sensor that the sensor converts to an electric charge. Essentially, this attribute is the overall light detection rate for an individual sensor.

Temporal dark noise & Dynamic range

After a sensor absorbs a photon and converts it to an electrical charge, the charge is held in a well until digitization, or measurement, of the charge begins. The error in that measurement is called temporal dark noise. Noise in the popular signal-to-noise ratio measurement comprises temporal dark noise as well as shot noise, which comes solely from the nature of the light.

Absolute sensitivity threshold

This is simply the number of photons needed to obtain a signal equivalent to the noise observed by the camera. It represents most directly the minimum amount of light needed to observe any meaningful signal in the resultant image.

Signal & Noise

Signal, then, may be calculated with the equation:

Signal = Light Density x (Pixel Size)^2 x Quantum Efficiency

If we treat Light Density as the variable, then we can represent a sensor's signal as a line whose slope denotes the signal at different light levels. Following that, noise may be calculated with the equation:

Noise = SQRT[ (Temporal Dark Noise)^2 + (Shot Noise)^2 ]

Given that we can't control the amount of shot noise, for the purpose of evaluating a sensor, take Noise = Temporal Dark Noise.
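
As a toy illustration of how these quantities combine when comparing sensors (all numbers below are made up, not actual sensor specs):

// snr_example.cpp -- toy comparison of two hypothetical sensors using the formulas
// above. Every number here is made up for illustration only.
#include <cstdio>

struct Sensor {
    double pixelSizeUm;       // pixel size in micrometers
    double quantumEfficiency; // fraction of incident photons converted to charge
    double darkNoiseE;        // temporal dark noise, in electrons
};

// Signal = Light Density x (Pixel Size)^2 x Quantum Efficiency
double signal(const Sensor& s, double lightDensity)
{
    return lightDensity * s.pixelSizeUm * s.pixelSizeUm * s.quantumEfficiency;
}

int main()
{
    Sensor bigPixel   {6.0, 0.60, 7.0}; // hypothetical large-pixel sensor
    Sensor smallPixel {3.5, 0.75, 2.5}; // hypothetical small-pixel, low-noise sensor

    double lightDensity = 2.0; // photons per square micrometer; arbitrary low-light level
    for (const Sensor& s : {bigPixel, smallPixel}) {
        // Ignoring shot noise, as suggested above: Noise ~= Temporal Dark Noise.
        double snr = signal(s, lightDensity) / s.darkNoiseE;
        std::printf("signal: %.1f e-, SNR: %.1f\n", signal(s, lightDensity), snr);
    }
    return 0;
}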

Why can't I just use a GoPro?

Funny thing about that-- we are, but not for our computer vision camera. GoPro has been extremely successful in bringing durable, high performance recording equipment to the masses. From what I read, they perform well in lower-light conditions as well (we'll see). Unfortunately, I can't find many specs about its sensors, and it has no API or wired streaming capability. Still, reliable down to 40m, I conjecture that our new [GoPro Hero4 Black](https://shop.gopro.com/hero4/hero4-black/CHDHX-401.html) shall perform well as a cinematic camera.

Where we're left

Considering the above characteristics affecting low-light performance, we can see the ultimate importance of pixel size, but also of a clean digitization process (for low noise). However, there might also exist a sensor with a fantastic quantum efficiency that offsets a smaller pixel. There might also exist other digital camera sensor attributes, important to low-light performance, which I've failed to cover here.

After this research, I'm left with four sensors from which to choose: Sony IMX 174/249/250/252. Looking at finished products from three separate manufacturers using these sensors, I briefly profiled each:

  • IMX174 - $1000+, very high FPS, 1900x1200, good QE, standard noise
  • IMX249 - ~$500, low FPS, 1900x1200, good QE, standard noise
  • IMX250 - $1000+, standard FPS, 2048x2048, standard QE, low noise
  • IMX252 - $1000+, high FPS, 2048x1536, standard QE, low noise

The IMX174 and IMX249 look almost exactly the same, save FPS and price. I'm leaning toward those two more than the others. They also have a fortunate QE quirk where they absorb 74% of photons in the green wavelength (525nm). Given that green penetrates deeper into the ocean, and maybe even with the help of some green LEDs of the correct wavelength, they could pleasantly surprise in regard to performance in our application!

Elevator Pitch

I often find, in explaining our purpose and our project to people, that their eyes glaze over at a certain point in the conversation. Most people lack deep technical expertise, and even if they ask about how a component of the AUV operates, they don't want to hear a technical explanation. They want a "technical enough" explanation encased in engrossing language. On top of that, they don't want to listen to you talk for more than 30 seconds. Here goes:

The Diamond Reef Explorers are a Brooklyn-based group tasked with creating an autonomous underwater vehicle to explore the bottom of the ocean around NYC. On command, the robot ventures on its own to the ocean floor and records high definition video of the undersea environment. It accomplishes this with three specialized computers for vision, navigation, and safety monitoring. The vision computer controls high-level behavior, as well as processes visual data for environment mapping and obstacle detection. The navigation computer controls the thrusters, and is tasked with vehicle stabilization in ocean currents and local positioning. The safety computer controls the ballast system, monitors the hardware for malfunctions, such as leaks, and monitors the software for critical errors. Development has begun on our first model, which we're on track to have in the water this Spring.

VirtualBox XBox360 ROS Control Over Network

This post may be considered a continuation of another post, "Use an Xbox360 controller with Vagrant on Mac". In fact, this post says nothing about setting up the Xbox controller with the VM-- that's covered there. What I've built on top of that is the ability to communicate with the controller from that VM, with a ROS master set up across a wired switch, and all of the problems that came along with that.

The Hardware

  • Xbox360 wired controller
  • Odroid C1+
  • TP-LINK TL-SG1005D switch
  • Ugreen 20258 USB 3.0 to ethernet adaptor
  • Mac

The Hardware Setup

Both the Mac and Odroid plug into the switch, and the controller plugs into the Mac. This setup is meant to simulate the situation where the switch resides inside the AUV, connecting the three computers, as well as providing an external ethernet port. In order to teleoperate the AUV, one would tether it to their laptop. The Xbox360 controller, connected to the laptop, is just a nice control interface.

The ethernet adaptor requires drivers, and possibly a reboot, but then it connected fine. Make sure to follow the manufacturer install instructions because there are a few nuances. Setting the connection for the device is not completely straightforward either. When you find the device in your network settings, it will give you the option of four IPv4 connection modes--  Using DHCP, Using DHCP with manual address, Using BootP, and Manually. I chose "Manually", and for ease of switching between networks, I put the device on the same subnet as my wireless network, with subnet mask 255.255.255.0. That might not be the best idea from a networking standpoint, but it lets me give the Odroid access to the internet, when necessary. The same goes for the VM. Perhaps related to this (but I don't think so), the adaptor loses its manual IP address every so often. I suspect this has something to do with OS X keeping the adapter driver daemon alive in the background when it's not active. When this occurs, I set the device's IP to a different address in network settings, prompting the adaptor driver to wake up and assign the new IP.

ROS Over Multiple Machines

As I mentioned before, the software setup for controlling an Xbox360 controller from a Vagrant-managed VirtualBox VM can be found in this post.

By default, VirtualBox VMs access the external network via Network Address Translation (NAT). This essentially creates a private, routed network within the VM, inaccessible from the rest of the world-- even from the host computer, except via the default forwarded pathways. Additionally, vagrant establishes an exception for SSH connections. Without getting into all the options (read more about them here), you want to change your VM's networking mode to Bridged by adding the following to your Vagrantfile:

config.vm.network :public_network, ip: "<chosen VM IP>", :public_network => "<network interface>"

For example, I use:

config.vm.network :public_network, ip: "192.168.1.99", :public_network => "en5: AX88179 USB 3.0 to Gigabit Ethernet"

You could leave off everything after the comma. In that case, vagrant will prompt you for the network interface when you vagrant reload. In Bridged network mode, the VM attaches itself to a device driver for a network interface, using it to communicate with any networks accessible to that interface. Once this connection is set, vagrant reload your machine, and follow the connectivity test instructions for ROS across multiple machines to ensure that you do indeed have that functionality.

Note that before you finally turn everything on, you must tweak a few odds and ends:

  • Disconnect your Xbox360 controller and don't reconnect it until you're SSHed into your VM
  • On your remote machine:
    • Run roscore and take note of the "ROS_MASTER_URI=..." line. You'll set this environment variable, exactly as you see it here, for both machines communicating.
    • export ROS_IP=<this machine IP>
    • export ROS_MASTER_URI=http://<this machine IP>:<roscore port>
  • Once inside your VM, set the following environment variables. Take note that the ROS_IP is that of the VM, while the IP and port for the ROS_MASTER_URI come from the startup output of the machine on which you ran `roscore`.
    • export ROS_IP=<your VM IP>
    • export ROS_MASTER_URI=http://<roscore machine IP>:<roscore port>

Once you start roscore on your ROS master machine, start the joystick node on the other. Run rostopic echo /joy on the ROS master to see messages come back from the controller while you fiddle with it.
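
If you'd rather consume the joystick messages in code than with rostopic, a minimal C++ subscriber looks something like this (a sketch; the node name is a placeholder):

// joy_listener.cpp -- minimal sensor_msgs/Joy subscriber (sketch).
#include <ros/ros.h>
#include <sensor_msgs/Joy.h>

void joyCallback(const sensor_msgs::Joy::ConstPtr& msg)
{
    // Print the first axis and button as a sanity check.
    ROS_INFO("axis0: %f  button0: %d",
             msg->axes.empty() ? 0.0f : msg->axes[0],
             msg->buttons.empty() ? 0 : msg->buttons[0]);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "joy_listener"); // placeholder node name
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("joy", 10, joyCallback);
    ros::spin();
    return 0;
}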


The Hull Has Arrived!

At the beginning of the project, I had dreams of creating a custom hull. I still do, but I quickly realized such an endeavor, with little to no mechanical experience, would set the project back further than I wish. All that being said, I purchased an ROV.

"No, it's not cheating," I screamed at my internal dialogue. While teleoperation will be an important part of our AUV, autonomy remains the main goal. On top of that, I plan to keep neither the computer that comes with the vehicle, nor its software. 

Why buy the ROV then? Well:

  • It's a prefab hull that I don't need to design or order
  • It came with six of the thrusters I planned to order
  • The lead time on the ROV was shorter than ordering the thrusters themselves
  • This particular design has "fins" to which to attach sensors
  • The ROV cost only slightly more than the sum of the thrusters
  • I estimate I have reduced the time to maiden voyage by at least 4 months
  • We always have the option to reuse hardware for future, custom-hulled models

Rather than drone on any longer, here it is-- the BlueRobotics BlueROV:


Positioning & Navigation

Whenever I tell anyone, technical or otherwise, about our group's goal to create an autonomous underwater vehicle that will explore the waters around NYC, one line of thought always emerges: Who is controlling the robot? How does it know where it is?

To the first question, no one controls it-- it's autonomous. Controlling an underwater robot is actually rather difficult. You need either a tether (typically an ethernet line) or extremely expensive military-grade equipment, which has its own difficulties. Radio waves do not travel through water very well. Very Low Frequency (VLF, 3-30kHz) radio waves might travel 20 meters, and Extremely Low Frequency (ELF, 3-300Hz) radio waves might travel several hundred meters. Keep in mind, typical WiFi operates at either 2.4GHz or 5GHz.

To the second question, there actually exist some adequate solutions. Unfortunately, most of them are too expensive for us (hello, sponsors!). The best and most expensive is a Doppler Velocity Log (DVL). The device contains several angled sonar devices that send out signals and then measure the time of flight and angle of return. From this, the DVL estimates its velocity, direction of travel, and/or position. If you maintain an acceptable depth relative to the ground, and if you have ~$25k to burn, this solution works well enough. Another expensive solution is a high precision Fiber Optic Gyro (FOG) Inertial Measurement Unit (IMU). An IMU typically contains a gyro, an accelerometer, and a magnetometer, and it is used primarily to measure orientation. With highly accurate IMUs, you may attempt to integrate the measured acceleration twice to obtain distance, but that double integration compounds the errors quickly. The FOG + compass can then tell you along which axis that distance was traveled. Some high-grade FOG IMUs have a low enough error that they can provide these measurements accurately, but they cost upwards of $8k.

That leaves us with the solution we decided to pursue-- high frequency acoustic triangulation. Black box acoustic locator beacons, which operate at 37.5 kHz, can be detected up to 3 miles away in good conditions. We plan to hang such an acoustic beacon, subsurface, from a floating device with a GPS attached to it. On the AUV, we'll attach four hydrophones (essentially, specialized microphones) at odd corners. All hydrophones will listen for the acoustic beacon's signal; from the time at which each hydrophone receives it, the AUV can determine the location of the beacon. With that, the AUV always knows its position relative to the acoustic beacon. Later, the GPS + acoustic beacon + timestamp data may be used to position the AUV globally. This solution will cost approximately $5k.

There it is. Positioning ain't easy, and positioning ain't cheap. It is, by far, the most expensive part of this project. However, the ability of a robot to position itself frees it to explore untethered, bounded only by power.

Use an Xbox360 controller with Vagrant on Mac

More than one person on the internet wanted to use a wired Xbox360 controller in a Vagrant VM, but most of the instructions I found are incomplete, or they flat out don't work. I found the following instructions to work swimmingly.

Step 1 - Install VirtualBox

Install VirtualBox and the corresponding version of VirtualBox Extension Pack. Both can be found here: https://www.virtualbox.org/wiki/Downloads

Step 2 - Install controller

Install version >= 0.15 of 360Controller. It makes your wired Xbox360 controller work on Mac OS X. Find that here: https://github.com/360Controller/360Controller/releases

Optionally, after that install, install Controllers Lite from the app store. Plug in your controller and then open the app. The app should reflect your button presses and joystick movements. Then, unplug the controller.

Step 3 - Controller info

On the success of the previous steps, hold Option and open the Apple menu > System Information > USB > Controller, then record the Product ID and Vendor ID attributes.

Step 4 - Vagrantfile

Add these lines to the appropriate position in your Vagrantfile:

vb.customize ["modifyvm", :id, "--usb", "on"]
vb.customize ["modifyvm", :id, "--usbehci", "on"]
vb.customize ["usbfilter", "add", "0",
  "--target", :id,
  "--name", "xbox360",
  "--vendorid", “<your vendor id>“,
  "--productid", “<your product id>“]

Save the file, then `vagrant reload`. After the machine reboots, `vagrant ssh` into the VM.

Step 5 - Install drivers

  • modprobe joydev
  • sudo apt-get install joystick xboxdrv
  • Plug in your controller
  • At this point, running lsusb should display your controller.
  • xboxdrv
  • After some positive stdout about finding the controller, any manual input to the controller will produce representative data in the terminal.
  • Kill xboxdrv

Step 6 - Start on boot

  • sudoedit /etc/init/xboxdrv.conf
  • Add the contents:
start on filesystem
exec xboxdrv -D
expect fork
  • echo "joydev" | sudo tee --append /etc/modules

The longest journey

When I started hatching this project in September 2015, I swore I wouldn't fool myself about how much time, money, work, and disappointment I would have to endure to complete this project. It's ambitious, but I believe that, with my professional robotics background and with the abilities of all the great people that will step forward to contribute, we can successfully deploy an AUV to explore the ocean floor around NYC.

To this end, I've brooded over the shortest and most fulfilling path to successful deployment that doesn't break my wallet. From the brooding emerged some key decisions:

  1. I purchased a BlueROV from Blue Robotics. I had planned to buy thrusters from the company regardless, but now, with the addition of a basic hull, we need not worry about mechanical designs and trial-and-error fabrication processes. I'd love to have a custom, more hydrodynamic hull, but it's not viable for someone with my limited mechanical knowledge to design one.
  2. The AUV needs a positioning system. A Doppler Velocity Log (DVL) is too expensive ($25k+). While IMUs might afford it an acceptable dead-reckoned position, the models capable of that are military grade and have military price tags. A hydrophone array which triangulates the AUV's position relative to a subsurface acoustic pinger will cost ~$5000 (that's good) and will provide an accurate position relative to the pinger. Such behavior is fine for our use cases.
  3. I purchased three single board computers from ODroid, which will operate three different on-board systems-- high-level behavior + image processing, navigation, and safety. ODroid makes nice SBCs for which they've created Ubuntu server ISOs.

All other major decisions remain in flux, and the not-knowing is always exciting.