With these requirements in mind, Melody Liu (my UROP partner) and I first designed a system to track the deformation of a pointed fingertip while experimenting with new ways to package the sensor optics. After dozens of configurations, we settled on a pointed fingertip with a camera and mirror, illuminated from the sides. This arrangement let us view the gel pad from a lower angle, making the finger thinner than previous versions. Ultimately, we found that, given the mechanical design of the fingertip, only flexion in one dimension (forward and back) mattered, and we were able to track it with two markers on the inside of the sensor pad. We also experimented with multiple manufacturing processes and coatings to increase the durability of the gel pads, and settled on protecting the pad with a loose-weave, high-friction fabric.
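The two-marker flexion tracking can be sketched roughly as follows. This is a minimal illustration, not our actual implementation: the one-marker-per-half assumption, the brightness threshold, and the function names are all hypothetical simplifications.

```python
import numpy as np

def marker_centroids(frame, thresh=0.6):
    # Assume one bright marker in each half of the grayscale image
    # (a simplification); take the intensity-weighted centroid of
    # above-threshold pixels in each half.
    h, w = frame.shape
    cents = []
    for half in (frame[:, :w // 2], frame[:, w // 2:]):
        ys, xs = np.nonzero(half > thresh)
        weights = half[ys, xs]
        cents.append((np.average(ys, weights=weights),
                      np.average(xs, weights=weights)))
    # Shift the right-half x coordinate back into full-image coordinates.
    cy, cx = cents[1]
    cents[1] = (cy, cx + w // 2)
    return cents

def flexion(rest, current):
    # One-dimensional flexion estimate: mean marker displacement
    # along the bending axis (image rows here), in pixels.
    return np.mean([c[0] - r[0] for r, c in zip(rest, current)])
```

Because only forward/back flexion mattered, a single scalar displacement along one image axis is enough; no full optical-flow field is needed.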
Over the course of this project, I learned a lot about materials, optical flow, and vision techniques. Aside from the technical knowledge I gained, this was my first experience with cross-lab collaboration. It took time to get used to the different communication styles and paradigms in another lab, especially since our areas of expertise were so different. When our team expanded to include Si Yuan Dong from the Adelson group, we made much faster headway because we no longer had to learn vision techniques from scratch before applying them. If we had structured the collaboration that way from the start, we would have learned less but had a functional product earlier. As always in academic settings and in life, it is a balance between learning and producing. In this case, I believe we could have had a vision expert working directly with us earlier while still getting the benefit of learning.
One of our mantras in the Amazon Robotics Challenge 2017 was to increase our use of sensing to enable reactive grasping behaviors. We were attracted to the Gelsight sensor developed by Edward Adelson's group at MIT because of its ability to resolve very small tactile features. Gelsight uses an elastomer pad with specular paint on its outer surface to "see" texture: when the pad is illuminated, contact deforms the surface and creates light and dark regions that a camera can detect. However, the sensor was too bulky and fragile for our robotic system, where the fingers must slide between objects and survive at least hundreds of picking operations. We recognized that there were key features, notably 3D reconstruction, that we did not need, and dropping them simplified the optics significantly. In addition, only the surface covered by the elastomer gel could be sensed, but we wanted the ability to sense deformation of the fingertip to infer contact with the environment even on sides without the sensor gel.
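The contact-sensing principle described above can be sketched in a few lines: compare the illuminated image against a no-contact reference and flag pixels that brightened or darkened significantly. This is a hedged sketch of the general idea only, not Gelsight's actual pipeline; the threshold value and function name are assumptions.

```python
import numpy as np

def contact_mask(frame, reference, thresh=0.15):
    # Contact deforms the painted elastomer skin, brightening or
    # darkening pixels relative to a no-contact reference image;
    # flag pixels whose intensity changed by more than `thresh`.
    diff = np.abs(frame.astype(float) - reference.astype(float))
    return diff > thresh

def contact_fraction(frame, reference, thresh=0.15):
    # Fraction of the pad currently in contact, a cheap scalar
    # signal for reactive grasping decisions.
    return contact_mask(frame, reference, thresh).mean()
```

A richer system would localize and track the contact region over time, but even this scalar fraction is enough to trigger a reactive behavior such as halting a closing gripper.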