
Note Detection Design Doc


Design Doc

Purpose: the latency of our encoders is going to be better than the latency of a camera/NetworkTables, so rather than running a PID loop on TX and TY, we can be cool and find the pose of the note and generate a trajectory toward it. It's also nice for logging, since you can replay where the robot thought the notes were.

Breakdown


The first part is getting the distance to the note. It's just basic trig, similar to the Limelight docs for getting the distance to an AprilTag.
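
Here is a minimal sketch of that trig, assuming a camera at a known height and pitch looking down at a note lying on the carpet. All of the constant names and values are hypothetical, not taken from the actual robot code; `ty` is the vertical offset angle reported by the camera.

```java
public class NoteDistance {
  // Camera lens height above the carpet, and its pitch (negative = tilted down).
  private static final double CAMERA_HEIGHT_METERS = 0.50;
  private static final double CAMERA_PITCH_RADIANS = Math.toRadians(-20.0);
  // A note lies flat on the floor, so its target height is roughly zero.
  private static final double NOTE_HEIGHT_METERS = 0.0;

  // ty = vertical offset angle to the note from the camera, in degrees.
  public static double getDistanceToNote(double tyDegrees) {
    // Same trig as the Limelight distance-to-target example:
    // distance = (targetHeight - cameraHeight) / tan(cameraPitch + ty)
    double totalAngle = CAMERA_PITCH_RADIANS + Math.toRadians(tyDegrees);
    return (NOTE_HEIGHT_METERS - CAMERA_HEIGHT_METERS) / Math.tan(totalAngle);
  }
}
```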

The next thing we need to do is transform the robot's pose to where the camera is. This is an x and y offset plus a rotation added to the robot pose. Honestly, the camera is close enough to the robot center that I don't think it matters much, but we might as well be accurate.
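
A sketch of that step using WPILib geometry classes; the `ROBOT_TO_CAMERA` numbers are placeholders, not the real mounting offsets.

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Transform2d;
import edu.wpi.first.math.geometry.Translation2d;

public class CameraPose {
  // Where the camera sits relative to the robot center (x forward, y left),
  // plus how it is yawed relative to the robot. Example numbers only.
  private static final Transform2d ROBOT_TO_CAMERA =
      new Transform2d(new Translation2d(0.30, 0.0), Rotation2d.fromDegrees(0.0));

  public static Pose2d getCameraPose(Pose2d robotPose) {
    // transformBy applies the offset in the robot's own coordinate frame.
    return robotPose.transformBy(ROBOT_TO_CAMERA);
  }
}
```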

We then need to build the camToNote transform. We can start by creating a Translation2d, which is just a set of x and y coordinates; we define it from the distance to the note and the angle to the note. Along with the x and y coordinates to add to the pose, we need a rotation. Since we want the robot to face the note as it moves toward it, we make the rotation the current rotation of the robot plus the angle to the note.
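
A sketch of building that transform and applying it to the camera pose. The method and parameter names are illustrative; the sign flip on `tx` assumes a camera that reports positive angles to the right, which WPILib's counter-clockwise-positive convention needs negated.

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Transform2d;
import edu.wpi.first.math.geometry.Translation2d;

public class NotePose {
  // distanceMeters comes from the trig above; txDegrees is the horizontal
  // angle to the note reported by the camera.
  public static Pose2d getNotePose(Pose2d cameraPose, double distanceMeters, double txDegrees) {
    Rotation2d angleToNote = Rotation2d.fromDegrees(-txDegrees);

    // Translation2d(distance, angle) gives the x/y offset to the note
    // in the camera's coordinate frame.
    Translation2d camToNote = new Translation2d(distanceMeters, angleToNote);

    // Use the same angle as the transform's rotation so the resulting pose
    // faces the note: current heading plus the angle to the note.
    return cameraPose.transformBy(new Transform2d(camToNote, angleToNote));
  }
}
```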

The pathfind function is pretty self-explanatory. Most of the constraints are dialed down to something safe since I don't want it generating impossible trajectories.
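
A minimal sketch of handing the note pose to a pathfinder, assuming the project uses PathPlanner's AutoBuilder (not confirmed by this page). The constraint values are placeholders chosen on the conservative side, as described above.

```java
import com.pathplanner.lib.auto.AutoBuilder;
import com.pathplanner.lib.path.PathConstraints;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.wpilibj2.command.Command;

public class DriveToNote {
  // Deliberately tame limits so the generated path stays drivable.
  private static final PathConstraints CONSTRAINTS =
      new PathConstraints(
          2.0,                    // max velocity (m/s)
          2.0,                    // max acceleration (m/s^2)
          Math.toRadians(360.0),  // max angular velocity (rad/s)
          Math.toRadians(540.0)); // max angular acceleration (rad/s^2)

  public static Command pathfindToNote(Pose2d notePose) {
    // Generates an on-the-fly path from the current pose to the note.
    return AutoBuilder.pathfindToPose(notePose, CONSTRAINTS);
  }
}
```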
