FRC Vision
When we talk about vision, we are referring to camera vision: using a camera mounted on the robot. Vision is used often in FRC. It can be used to help drivers navigate the field, and it can also be used during autonomous/teleop to track targets to score points. The first time the team used vision tracking in autonomous was in the 2017 game, Steamworks. Since then, we have used vision tracking in autonomous.
Each year there are strips of RetroReflective tape near targets on the field. The tape acts as an object to track during the game. By mounting a green LED ring on the camera, we are able to detect the RetroReflective tape more reliably. Using programs such as GRIP, we are able to track the target and use that information to move/align the robot autonomously.
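As an illustration of how the green-lit tape can be isolated, here is a minimal OpenCV sketch of an HSV threshold, the same kind of operation a GRIP pipeline generates. The exact HSV bounds and function name are assumptions for this example; in practice the values are tuned with GRIP's sliders.

```python
import cv2
import numpy as np

def find_tape_mask(frame):
    """Return a binary mask where the green-lit tape shows up as white."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Illustrative HSV bounds for the green LED reflection; real values are
    # tuned with GRIP's threshold sliders.
    lower = np.array([50, 100, 100])
    upper = np.array([90, 255, 255])
    return cv2.inRange(hsv, lower, upper)
```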
GRIP (Graphically Represented Image Processing) is an application for creating and deploying computer vision algorithms for FIRST Robotics teams. GRIP makes the process of creating a vision tracking program easier by having a simple layout, which mainly involves clicking, dragging, and sliding values. After 2017, we moved over to GRIP to create our autonomous vision tracking. We successfully created a program that lines up the robot with the target for the 2020 game, Infinite Recharge.
Our 2017 vision code was successful but complex. We didn't use GRIP; instead, we used OpenCV (an open-source computer vision library) directly, along with Python. To learn more about our 2017 vision code, you can click here.
NetworkTables allows you to communicate with your roboRIO from an off-board co-processor. More information on NetworkTables can be found here.
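As a rough illustration (not taken from our code base), this is how a co-processor might publish a value over NetworkTables using the pynetworktables library. The server address, table name, and key are placeholders.

```python
from networktables import NetworkTables

# Connect to the roboRIO as a NetworkTables client.
# "10.0.0.2" is a placeholder; a real robot uses its team's 10.TE.AM.2 address.
NetworkTables.initialize(server="10.0.0.2")

table = NetworkTables.getTable("vision")   # hypothetical table name
table.putNumber("targetX", 123.0)          # robot code can read this same key
```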
Our 2020 Align Code was mainly written by our member, Adriana, using GRIP and NetworkTables. The code found the RetroReflective tape and aligned the robot to the center of the tape. This helped us shoot the Power Cells (balls) into the goal. This code was used in both autonomous and teleop.
We used GRIP to generate code that isolated the RetroReflective tape in the camera feed. The generated code was then edited so we could take the image from the camera feed and use it to send information to the robot. This was done by putting a bounding box around the detected tape and then sending that information to NetworkTables. This code did not run on the roboRIO; it ran on an off-board co-processor, a Raspberry Pi. More information on how this was done can be found here.
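The sketch below shows the general idea, not our actual pipeline: find the largest contour in the thresholded mask, put a bounding box around it, and publish the box's center to NetworkTables. It assumes OpenCV 4's findContours signature and reuses the hypothetical "vision" table and placeholder address from the earlier example.

```python
import cv2
from networktables import NetworkTables

NetworkTables.initialize(server="10.0.0.2")   # placeholder roboRIO address
table = NetworkTables.getTable("vision")      # hypothetical table name

def publish_target(mask):
    """Find the largest blob of tape in the mask and publish its bounding box center."""
    # OpenCV 4 returns (contours, hierarchy) here.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        table.putBoolean("hasTarget", False)
        return
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    table.putBoolean("hasTarget", True)
    table.putNumber("centerX", x + w / 2)   # horizontal center of the target in pixels
```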
The vision code can be found here, and the changes we made are in lines 74 to 114.
To get the robot to align, the NetworkTables values were read by the robot code. A simplified example can be found here.
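As a rough sketch of what the robot-side logic could look like (assumed for illustration, not our actual code), a simple proportional controller turns the robot until the target's center lines up with the center of the image. The table/key names, image width, and gain are all illustrative.

```python
from networktables import NetworkTables

# On the roboRIO, NetworkTables runs as the server, so the robot code just
# grabs the same "vision" table the co-processor writes to.
vision = NetworkTables.getTable("vision")

IMAGE_CENTER_X = 160   # half of an assumed 320-pixel-wide camera frame
KP = 0.005             # proportional gain; tuned on the real robot

def align_turn_command():
    """Return a turn value in [-1, 1]; 0 when there is no target."""
    if not vision.getBoolean("hasTarget", False):
        return 0.0
    error = vision.getNumber("centerX", IMAGE_CENTER_X) - IMAGE_CENTER_X
    return max(-1.0, min(1.0, KP * error))
```

The turn value can then be fed into the drivetrain each loop until the error is close to zero, at which point the robot is facing the target.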