Gripper orientation inputs #386

Open
ahundt opened this issue Jan 8, 2018 · 0 comments

ahundt commented Jan 8, 2018

We need to evaluate a number of inputs to determine the best possible grasp command format.

We are inputting an angle encoded as the pair sin(theta), cos(theta), and the angle theta itself can be defined in the following ways (a minimal encoding sketch follows the list):

  1. current gripper location to final gripper location
    • we have trained with this setting; results are not good yet, but that may be due to a bug elsewhere
  2. Angle between robot base and final gripper orientation
  3. both 1 and 2
  4. camera frame to final gripper frame
    • Problem: the camera is not mounted square (axis-aligned) relative to the ground
    • Should we also include more angles, such as pitch and yaw?
  5. delta depth + quaternion from camera frame
  6. delta depth + rpy from camera frame
  7. No theta at all
  8. A 3-class problem where the classes are the three possible gripper heights.
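
For reference, here is a minimal sketch of the sin/cos encoding under discussion, applied to option 1 (angle from the current to the final gripper orientation). The helper names and the yaw-only assumption are illustrative, not the repository's actual API:

```python
import numpy as np

def encode_angle(theta):
    """Encode an angle as the (sin, cos) pair fed to the network.

    Hypothetical helper for illustration; the real training pipeline
    may encode the angle differently.
    """
    return np.array([np.sin(theta), np.cos(theta)])

def decode_angle(sin_cos):
    """Recover theta from a (sin, cos) pair, e.g. a network prediction."""
    return np.arctan2(sin_cos[0], sin_cos[1])

# Option 1 example: angle from the current gripper yaw to the final
# gripper yaw, wrapped to (-pi, pi] before encoding.
current_yaw = np.deg2rad(30.0)
final_yaw = np.deg2rad(170.0)
delta = np.arctan2(np.sin(final_yaw - current_yaw),
                   np.cos(final_yaw - current_yaw))
network_input = encode_angle(delta)
print(network_input, np.rad2deg(decode_angle(network_input)))
```

One reason for the sin/cos pair is that it avoids the discontinuity at the ±pi wrap-around, and atan2 of the predicted pair maps back to a unique angle regardless of the frame chosen in options 1–6.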
ahundt changed the title from "Grasp pixel wise prediction - evaluate different orientation inputs" to "Gripper orientation inputs" on Jan 9, 2018