Running on Jetson Nano? Maxwell 128 cores? #3

Open
PaulKrush opened this issue Mar 22, 2019 · 16 comments


PaulKrush commented Mar 22, 2019

Cool Repo! Do you think VINS-Fusion will run well on a Jetson Nano?

pjrambo (Owner) commented Mar 26, 2019

Thanks! Haven't gotten a Jetson Nano yet :(


ghost commented Mar 28, 2019

Hi!
I am testing on a Jetson Nano, but I have encountered some problems. I manually installed OpenCV 3.4.5 with CUDA and compiled VINS-Fusion-gpu, and I received some warnings.

[  1%] Linking CXX shared library /home/jetsonnano/catkin_ws/VINS-Fusion/devel/lib/libcamera_models.so
[ 19%] Built target camera_models
[ 39%] Built target Calibrations
[ 41%] Linking CXX shared library /home/jetsonnano/catkin_ws/VINS-Fusion/devel/lib/liblibGeographiccc.so
[ 45%] Built target libGeographiccc
[ 46%] Linking CXX executable /home/jetsonnano/catkin_ws/VINS-Fusion/devel/lib/global_fusion/global_fusion_node
[ 49%] Built target global_fusion_node
[ 50%] Linking CXX executable /home/jetsonnano/catkin_ws/VINS-Fusion/devel/lib/loop_fusion/loop_fusion_node
/usr/bin/ld: warning: libopencv_imgproc.so.3.4, needed by /home/jetsonnano/catkin_ws/VINS-Fusion/devel/lib/libcamera_models.so, may conflict with libopencv_imgproc.so.3.2
/usr/bin/ld: warning: libopencv_core.so.3.4, needed by /home/jetsonnano/catkin_ws/VINS-Fusion/devel/lib/libcamera_models.so, may conflict with libopencv_core.so.3.2
/usr/bin/ld: warning: libopencv_imgcodecs.so.3.4, needed by /home/jetsonnano/OpenCV/opencv/build/lib/libopencv_sfm.so.3.4.5, may conflict with libopencv_imgcodecs.so.3.2
[ 68%] Built target loop_fusion_node
[ 69%] Linking CXX shared library /home/jetsonnano/catkin_ws/VINS-Fusion/devel/lib/libvins_lib.so
[ 91%] Built target vins_lib
[ 93%] Linking CXX executable /home/jetsonnano/catkin_ws/VINS-Fusion/devel/lib/vins/kitti_odom_test
/usr/bin/ld: warning: libopencv_core.so.3.4, needed by /home/jetsonnano/catkin_ws/VINS-Fusion/devel/lib/libvins_lib.so, may conflict with libopencv_core.so.3.2
/usr/bin/ld: warning: libopencv_imgcodecs.so.3.4, needed by /home/jetsonnano/OpenCV/opencv/build/lib/libopencv_sfm.so.3.4.5, may conflict with libopencv_imgcodecs.so.3.2
[ 94%] Built target kitti_odom_test
[ 95%] Linking CXX executable /home/jetsonnano/catkin_ws/VINS-Fusion/devel/lib/vins/vins_node
/usr/bin/ld: warning: libopencv_core.so.3.4, needed by /home/jetsonnano/catkin_ws/VINS-Fusion/devel/lib/libvins_lib.so, may conflict with libopencv_core.so.3.2
[ 97%] Built target vins_node
[ 98%] Linking CXX executable /home/jetsonnano/catkin_ws/VINS-Fusion/devel/lib/vins/kitti_gps_test
/usr/bin/ld: warning: libopencv_core.so.3.4, needed by /home/jetsonnano/catkin_ws/VINS-Fusion/devel/lib/libvins_lib.so, may conflict with libopencv_core.so.3.2
/usr/bin/ld: warning: libopencv_imgcodecs.so.3.4, needed by /home/jetsonnano/OpenCV/opencv/build/lib/libopencv_sfm.so.3.4.5, may conflict with libopencv_imgcodecs.so.3.2
[100%] Built target kitti_gps_test

So when I run this package, I get an error saying that my OpenCV does not have CUDA support.


ghost commented Mar 29, 2019

The problem has been solved; it was a cv_bridge and OpenCV compatibility issue.

@FrozenBurning

> The problem has been solved; it was a cv_bridge and OpenCV compatibility issue.

Could you tell me how you solved this problem? I face the same one: it compiles successfully, but the package throws an error that the OpenCV in my ROS does not have CUDA support. But I DO have the CUDA version of OpenCV. Thanks!

pjrambo (Owner) commented Mar 31, 2019

> The problem has been solved; it was a cv_bridge and OpenCV compatibility issue.
>
> Could you tell me how you solved this problem? I face the same one: it compiles successfully, but the package throws an error that the OpenCV in my ROS does not have CUDA support. But I DO have the CUDA version of OpenCV. Thanks!

You need to get the cv_bridge source code from https://github.com/ros-perception/vision_opencv and clone it into your ROS workspace. Then modify the cv_bridge CMakeLists so that cv_bridge finds your CUDA version of OpenCV. Finally, run catkin_make; problem solved.
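For example, something along these lines in the cloned cv_bridge/CMakeLists.txt (a minimal sketch only; the /usr/local install prefix, the OpenCV 3.4.5 version, and the exact original find_package line are assumptions that depend on your setup and cv_bridge release):

```cmake
# Hypothetical edit to cv_bridge/CMakeLists.txt: point find_package at the
# CUDA-enabled OpenCV build instead of the ROS-provided OpenCV 3.2.
# Adjust OpenCV_DIR to wherever your OpenCVConfig.cmake was installed.
set(OpenCV_DIR /usr/local/share/OpenCV)
find_package(OpenCV 3.4.5 REQUIRED
  COMPONENTS opencv_core opencv_imgproc opencv_imgcodecs
  CONFIG)
```

Then run catkin_make so this cv_bridge overlay, linked against the CUDA-enabled OpenCV, is picked up instead of the system cv_bridge, and rebuild VINS-Fusion-gpu against it.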

@FrozenBurning

> The problem has been solved; it was a cv_bridge and OpenCV compatibility issue.
>
> Could you tell me how you solved this problem? I face the same one: it compiles successfully, but the package throws an error that the OpenCV in my ROS does not have CUDA support. But I DO have the CUDA version of OpenCV. Thanks!
>
> You need to get the cv_bridge source code from https://github.com/ros-perception/vision_opencv and clone it into your ROS workspace. Then modify the cv_bridge CMakeLists so that cv_bridge finds your CUDA version of OpenCV. Finally, run catkin_make; problem solved.

Gotcha! Thanks

@PaulKrush (Author)

This is great information! I should just go buy a Nano and try it myself.

@YuyingJin0111 Do you have it running? Does it work well? What kind of resources does it use? CPU cores and %? GPU %? Memory? Thanks!


ghost commented Apr 2, 2019

@PaulKrush On the EuRoC data, my measured performance is not good. There is a big delay that I have not solved yet. But I implemented GPU acceleration on VINS-Mono and got better real-time performance.

pjrambo (Owner) commented Apr 2, 2019

> @PaulKrush On the EuRoC data, my measured performance is not good. There is a big delay that I have not solved yet. But I implemented GPU acceleration on VINS-Mono and got better real-time performance.

Have you set your board to max performance?
Could you please show your CPU and GPU usage rates?


ghost commented Apr 2, 2019

Screenshot from 2019-04-02 21-15-46

Screenshot from 2019-04-02 21-15-20

This is the result of my current test. I am still a newcomer to the Jetson platform, and I don't know how to check the GPU usage rate.

sudo: ~/tegrastats: command not found


ghost commented Apr 2, 2019

@pjrambo
Screenshot from 2019-04-02 21-40-21

The EuRoC bag has finished playing.

@struggleforbetter

I also encountered this problem. On my computer it is actually faster without GPU acceleration. I track the image both with and without GPU acceleration at the same time to compare timings; here is the code:

```cpp
map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackImage(double _cur_time, const cv::Mat &_img, const cv::Mat &_img1)
{
static int count = 0;
count ++;
double sum_spend_time = 0.0;
static double norm_spend_time = 0.0;
static double gpu_spend_time = 0.0;
TicToc t_r;
cur_time = _cur_time;
cur_img = _img;
row = cur_img.rows;
col = cur_img.cols;
cv::Mat rightImg = _img1;
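// Each tracking/detection stage below is run twice: first with the plain cv::
// CPU functions, then with the cv::cuda:: versions; norm_spend_time and
// gpu_spend_time accumulate the two timings for comparison.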
/*
{
cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(3.0, cv::Size(8, 8));
clahe->apply(cur_img, cur_img);
if(!rightImg.empty())
clahe->apply(rightImg, rightImg);
}
*/
cur_pts.clear();

if (prev_pts.size() > 0)
{
    vector<uchar> status;
         // Without GPU
        TicToc t_o;
        
        vector<float> err;
        if(hasPrediction)
        {
            cur_pts = predict_pts;
            cv::calcOpticalFlowPyrLK(prev_img, cur_img, prev_pts, cur_pts, status, err, cv::Size(21, 21), 1, 
            cv::TermCriteria(cv::TermCriteria::COUNT+cv::TermCriteria::EPS, 30, 0.01), cv::OPTFLOW_USE_INITIAL_FLOW);
            
            int succ_num = 0;
            for (size_t i = 0; i < status.size(); i++)
            {
                if (status[i])
                    succ_num++;
            }
            if (succ_num < 10)
            cv::calcOpticalFlowPyrLK(prev_img, cur_img, prev_pts, cur_pts, status, err, cv::Size(21, 21), 3);
        }
        else
            cv::calcOpticalFlowPyrLK(prev_img, cur_img, prev_pts, cur_pts, status, err, cv::Size(21, 21), 3);
        // reverse check
        if(FLOW_BACK)
        {
            vector<uchar> reverse_status;
            vector<cv::Point2f> reverse_pts = prev_pts;
            cv::calcOpticalFlowPyrLK(cur_img, prev_img, cur_pts, reverse_pts, reverse_status, err, cv::Size(21, 21), 1, 
            cv::TermCriteria(cv::TermCriteria::COUNT+cv::TermCriteria::EPS, 30, 0.01), cv::OPTFLOW_USE_INITIAL_FLOW);
            //cv::calcOpticalFlowPyrLK(cur_img, prev_img, cur_pts, reverse_pts, reverse_status, err, cv::Size(21, 21), 3); 
            for(size_t i = 0; i < status.size(); i++)
            {
                if(status[i] && reverse_status[i] && distance(prev_pts[i], reverse_pts[i]) <= 0.5)
                {
                    status[i] = 1;
                }
                else
                    status[i] = 0;
            }
        }
        // printf("temporal optical flow costs: %fms\n", t_o.toc());
        norm_spend_time += t_o.toc();
        // With GPU
        cur_pts.clear();
        TicToc t_og;
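        // The input images and point lists are uploaded to GpuMat on every frame
        // (host-to-device copies), and the results are downloaded back below;
        // this per-frame transfer overhead is counted in gpu_spend_time.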
        cv::cuda::GpuMat prev_gpu_img(prev_img);
        cv::cuda::GpuMat cur_gpu_img(cur_img);
        cv::cuda::GpuMat prev_gpu_pts(prev_pts);
        cv::cuda::GpuMat cur_gpu_pts(cur_pts);
        cv::cuda::GpuMat gpu_status;
        if(hasPrediction)
        {
            cur_gpu_pts = cv::cuda::GpuMat(predict_pts);
            cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse = cv::cuda::SparsePyrLKOpticalFlow::create(
            cv::Size(21, 21), 1, 30, true);
            d_pyrLK_sparse->calc(prev_gpu_img, cur_gpu_img, prev_gpu_pts, cur_gpu_pts, gpu_status);
            
            vector<cv::Point2f> tmp_cur_pts(cur_gpu_pts.cols);
            cur_gpu_pts.download(tmp_cur_pts);
            cur_pts = tmp_cur_pts;

            vector<uchar> tmp_status(gpu_status.cols);
            gpu_status.download(tmp_status);
            status = tmp_status;

            int succ_num = 0;
            for (size_t i = 0; i < tmp_status.size(); i++)
            {
                if (tmp_status[i])
                    succ_num++;
            }
            if (succ_num < 10)
            {
                cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse = cv::cuda::SparsePyrLKOpticalFlow::create(
                cv::Size(21, 21), 3, 30, false);
                d_pyrLK_sparse->calc(prev_gpu_img, cur_gpu_img, prev_gpu_pts, cur_gpu_pts, gpu_status);

                vector<cv::Point2f> tmp1_cur_pts(cur_gpu_pts.cols);
                cur_gpu_pts.download(tmp1_cur_pts);
                cur_pts = tmp1_cur_pts;

                vector<uchar> tmp1_status(gpu_status.cols);
                gpu_status.download(tmp1_status);
                status = tmp1_status;
            }
        }
        else
        {
            cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse = cv::cuda::SparsePyrLKOpticalFlow::create(
            cv::Size(21, 21), 3, 30, false);
            d_pyrLK_sparse->calc(prev_gpu_img, cur_gpu_img, prev_gpu_pts, cur_gpu_pts, gpu_status);

            vector<cv::Point2f> tmp1_cur_pts(cur_gpu_pts.cols);
            cur_gpu_pts.download(tmp1_cur_pts);
            cur_pts = tmp1_cur_pts;

            vector<uchar> tmp1_status(gpu_status.cols);
            gpu_status.download(tmp1_status);
            status = tmp1_status;
        }
        if(FLOW_BACK)
        {
            cv::cuda::GpuMat reverse_gpu_status;
            cv::cuda::GpuMat reverse_gpu_pts = prev_gpu_pts;
            cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse = cv::cuda::SparsePyrLKOpticalFlow::create(
            cv::Size(21, 21), 1, 30, true);
            d_pyrLK_sparse->calc(cur_gpu_img, prev_gpu_img, cur_gpu_pts, reverse_gpu_pts, reverse_gpu_status);

            vector<cv::Point2f> reverse_pts(reverse_gpu_pts.cols);
            reverse_gpu_pts.download(reverse_pts);

            vector<uchar> reverse_status(reverse_gpu_status.cols);
            reverse_gpu_status.download(reverse_status);

            for(size_t i = 0; i < status.size(); i++)
            {
                if(status[i] && reverse_status[i] && distance(prev_pts[i], reverse_pts[i]) <= 0.5)
                {
                    status[i] = 1;
                }
                else
                    status[i] = 0;
            }
        }
        // printf("gpu temporal optical flow costs: %f ms\n",t_og.toc());
        gpu_spend_time += t_og.toc();

    for (int i = 0; i < int(cur_pts.size()); i++)
        if (status[i] && !inBorder(cur_pts[i]))
            status[i] = 0;
    reduceVector(prev_pts, status);
    reduceVector(cur_pts, status);
    reduceVector(ids, status);
    reduceVector(track_cnt, status);
    // ROS_DEBUG("temporal optical flow costs: %fms", t_o.toc());
    
    //printf("track cnt %d\n", (int)ids.size());
}

for (auto &n : track_cnt)
    n++;

if (1)
{
    //rejectWithF();
    ROS_DEBUG("set mask begins");
    TicToc t_m;
    setMask();
    // ROS_DEBUG("set mask costs %fms", t_m.toc());
    // printf("set mask costs %fms\n", t_m.toc());
    ROS_DEBUG("detect feature begins");
    
    int n_max_cnt = MAX_CNT - static_cast<int>(cur_pts.size());
    // if(!USE_GPU)
    {
        TicToc t_t;
        if (n_max_cnt > 0)
        {
            
            if(mask.empty())
                cout << "mask is empty " << endl;
            if (mask.type() != CV_8UC1)
                cout << "mask type wrong " << endl;
            cv::goodFeaturesToTrack(cur_img, n_pts, MAX_CNT - cur_pts.size(), 0.01, MIN_DIST, mask);
            // printf("good feature to track costs: %fms\n", t_t.toc());
            std::cout << "n_pts size: "<< n_pts.size()<<std::endl;
        }
        else
            n_pts.clear();
        // sum_n += n_pts.size();
        // printf("total point from non-gpu: %d\n",sum_n);
        // printf("detect feature costs no GPU: %fms", t_t.toc());
        norm_spend_time += t_t.toc();
    }
    
    // ROS_DEBUG("detect feature costs: %fms", t_t.toc());
    // printf("good feature to track costs: %fms\n", t_t.toc());
    // else
    {
        TicToc t_g;
        if (n_max_cnt > 0)
        {
            if(mask.empty())
                cout << "mask is empty " << endl;
            if (mask.type() != CV_8UC1)
                cout << "mask type wrong " << endl;
            
            cv::cuda::GpuMat cur_gpu_img(cur_img);
            cv::cuda::GpuMat d_prevPts;
            TicToc t_gg;
            cv::cuda::GpuMat gpu_mask(mask);
            // printf("gpumat cost: %fms\n",t_gg.toc());
            cv::Ptr<cv::cuda::CornersDetector> detector = cv::cuda::createGoodFeaturesToTrackDetector(cur_gpu_img.type(), MAX_CNT - cur_pts.size(), 0.01, MIN_DIST);
            // cout << "new gpu points: "<< MAX_CNT - cur_pts.size()<<endl;
            detector->detect(cur_gpu_img, d_prevPts, gpu_mask);
            // std::cout << "d_prevPts size: "<< d_prevPts.size()<<std::endl;
            if(!d_prevPts.empty())
                n_pts = cv::Mat_<cv::Point2f>(cv::Mat(d_prevPts));
            else
                n_pts.clear();
            // sum_n += n_pts.size();
            // printf("total point from gpu: %d\n",sum_n);
            
        }
        else 
            n_pts.clear();
        // printf("gpu good feature to track cost: %fms\n", t_g.toc());
        gpu_spend_time += t_g.toc();
    }

    ROS_DEBUG("add feature begins");
    TicToc t_a;
    addPoints();
    // ROS_DEBUG("selectFeature costs: %fms", t_a.toc());
    // printf("selectFeature costs: %fms\n", t_a.toc());
}

cur_un_pts = undistortedPts(cur_pts, m_camera[0]);
pts_velocity = ptsVelocity(ids, cur_un_pts, cur_un_pts_map, prev_un_pts_map);

if(!_img1.empty() && stereo_cam)
{
    ids_right.clear();
    cur_right_pts.clear();
    cur_un_right_pts.clear();
    right_pts_velocity.clear();
    cur_un_right_pts_map.clear();
    if(!cur_pts.empty())
    {
        //printf("stereo image; track feature on right image\n");
        
        vector<cv::Point2f> reverseLeftPts;
        vector<uchar> status, statusRightLeft;
        // if(!USE_GPU_ACC_FLOW)
        {
            TicToc t_check;
            vector<float> err;
            // cur left ---- cur right
            cv::calcOpticalFlowPyrLK(cur_img, rightImg, cur_pts, cur_right_pts, status, err, cv::Size(21, 21), 3);
            // reverse check cur right ---- cur left
            if(FLOW_BACK)
            {
                cv::calcOpticalFlowPyrLK(rightImg, cur_img, cur_right_pts, reverseLeftPts, statusRightLeft, err, cv::Size(21, 21), 3);
                for(size_t i = 0; i < status.size(); i++)
                {
                    if(status[i] && statusRightLeft[i] && inBorder(cur_right_pts[i]) && distance(cur_pts[i], reverseLeftPts[i]) <= 0.5)
                        status[i] = 1;
                    else
                        status[i] = 0;
                }
            }
            // printf("left right optical flow cost %fms\n",t_check.toc());
            norm_spend_time += t_check.toc();
        }
        // else
        {
            TicToc t_og1;
            cv::cuda::GpuMat cur_gpu_img(cur_img);
            cv::cuda::GpuMat right_gpu_Img(rightImg);
            cv::cuda::GpuMat cur_gpu_pts(cur_pts);
            cv::cuda::GpuMat cur_right_gpu_pts;
            cv::cuda::GpuMat gpu_status;
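            // As above, both full-resolution images are uploaded to the GPU again
            // for the left-right check, so the stereo GPU timing also includes
            // these copies.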
            cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse = cv::cuda::SparsePyrLKOpticalFlow::create(
            cv::Size(21, 21), 3, 30, false);
            d_pyrLK_sparse->calc(cur_gpu_img, right_gpu_Img, cur_gpu_pts, cur_right_gpu_pts, gpu_status);

            vector<cv::Point2f> tmp_cur_right_pts(cur_right_gpu_pts.cols);
            cur_right_gpu_pts.download(tmp_cur_right_pts);
            cur_right_pts = tmp_cur_right_pts;

            vector<uchar> tmp_status(gpu_status.cols);
            gpu_status.download(tmp_status);
            status = tmp_status;

            if(FLOW_BACK)
            {   
                cv::cuda::GpuMat reverseLeft_gpu_Pts;
                cv::cuda::GpuMat status_gpu_RightLeft;
                cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse = cv::cuda::SparsePyrLKOpticalFlow::create(
                cv::Size(21, 21), 3, 30, false);
                d_pyrLK_sparse->calc(right_gpu_Img, cur_gpu_img, cur_right_gpu_pts, reverseLeft_gpu_Pts, status_gpu_RightLeft);

                vector<cv::Point2f> tmp_reverseLeft_Pts(reverseLeft_gpu_Pts.cols);
                reverseLeft_gpu_Pts.download(tmp_reverseLeft_Pts);
                reverseLeftPts = tmp_reverseLeft_Pts;

                vector<uchar> tmp1_status(status_gpu_RightLeft.cols);
                status_gpu_RightLeft.download(tmp1_status);
                statusRightLeft = tmp1_status;
                for(size_t i = 0; i < status.size(); i++)
                {
                    if(status[i] && statusRightLeft[i] && inBorder(cur_right_pts[i]) && distance(cur_pts[i], reverseLeftPts[i]) <= 0.5)
                        status[i] = 1;
                    else
                        status[i] = 0;
                }
            }
            // printf("gpu left right optical flow cost %fms\n",t_og1.toc());
            gpu_spend_time += t_og1.toc();
        }
        ids_right = ids;
        reduceVector(cur_right_pts, status);
        reduceVector(ids_right, status);
        // only keep left-right pts
        /*
        reduceVector(cur_pts, status);
        reduceVector(ids, status);
        reduceVector(track_cnt, status);
        reduceVector(cur_un_pts, status);
        reduceVector(pts_velocity, status);
        */
        cur_un_right_pts = undistortedPts(cur_right_pts, m_camera[1]);
        right_pts_velocity = ptsVelocity(ids_right, cur_un_right_pts, cur_un_right_pts_map, prev_un_right_pts_map);
        
    }
    prev_un_right_pts_map = cur_un_right_pts_map;
}
if(SHOW_TRACK)
    drawTrack(cur_img, rightImg, ids, cur_pts, cur_right_pts, prevLeftPtsMap);

prev_img = cur_img;
prev_pts = cur_pts;
prev_un_pts = cur_un_pts;
prev_un_pts_map = cur_un_pts_map;
prev_time = cur_time;
hasPrediction = false;

prevLeftPtsMap.clear();
for(size_t i = 0; i < cur_pts.size(); i++)
    prevLeftPtsMap[ids[i]] = cur_pts[i];

map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> featureFrame;
for (size_t i = 0; i < ids.size(); i++)
{
    int feature_id = ids[i];
    double x, y ,z;
    x = cur_un_pts[i].x;
    y = cur_un_pts[i].y;
    z = 1;
    double p_u, p_v;
    p_u = cur_pts[i].x;
    p_v = cur_pts[i].y;
    int camera_id = 0;
    double velocity_x, velocity_y;
    velocity_x = pts_velocity[i].x;
    velocity_y = pts_velocity[i].y;

    Eigen::Matrix<double, 7, 1> xyz_uv_velocity;
    xyz_uv_velocity << x, y, z, p_u, p_v, velocity_x, velocity_y;
    featureFrame[feature_id].emplace_back(camera_id,  xyz_uv_velocity);
}

if (!_img1.empty() && stereo_cam)
{
    for (size_t i = 0; i < ids_right.size(); i++)
    {
        int feature_id = ids_right[i];
        double x, y ,z;
        x = cur_un_right_pts[i].x;
        y = cur_un_right_pts[i].y;
        z = 1;
        double p_u, p_v;
        p_u = cur_right_pts[i].x;
        p_v = cur_right_pts[i].y;
        int camera_id = 1;
        double velocity_x, velocity_y;
        velocity_x = right_pts_velocity[i].x;
        velocity_y = right_pts_velocity[i].y;

        Eigen::Matrix<double, 7, 1> xyz_uv_velocity;
        xyz_uv_velocity << x, y, z, p_u, p_v, velocity_x, velocity_y;
        featureFrame[feature_id].emplace_back(camera_id,  xyz_uv_velocity);
    }
}

//printf("feature track whole time %f\n", t_r.toc());
sum_spend_time += t_r.toc();
// ROS_WARN("feature track average spend time: %f",sum_spend_time/count);
printf("norm average spend time: %f\n",norm_spend_time/count);
printf("gpu average spend time: %f\n",gpu_spend_time/count);
return featureFrame;

}
```
When I don't run vins_fusion, the GPU load is:
Screenshot 2019-04-12 21-25-20
Then, when it is running:
Screenshot 2019-04-12 21-24-13


struggleforbetter commented Apr 18, 2019

Turning on high-performance mode on the TX2 achieves real time, but I compared goodFeaturesToTrack on CPU and GPU and found that the CPU version generally extracts more feature points than the GPU version. Has anyone else compared this? @pjrambo

@PaulKrush (Author)

@pjrambo, Sorry, I have yet to try VINS-Fusion-gpu on the Jetson Nano.

@arjunskumar

Hi, I managed to install vins-fusion-gpu on the NVIDIA Jetson Nano.
I have documented my steps here: vins-fusion-jetson-nano. I was able to run bag files smoothly. I also tried it with a RealSense D435i and was able to get good results.


zrd1234 commented Jul 28, 2022

> @PaulKrush On the EuRoC data, my measured performance is not good. There is a big delay that I have not solved yet. But I implemented GPU acceleration on VINS-Mono and got better real-time performance.

Hi, thank you for your work. I ported the GPU acceleration from VINS-Fusion-gpu to VINS-Mono. However, the following error occurred when running it; do you know how to solve it?
(screenshot of the error)
