Who writes here?
I am Ramsri, a recent master's graduate working in the fields of computer vision, image processing, and machine learning.
I left a comment on your YouTube account. I have a problem doing background removal, so it would be good if you could share your little .exe or tell me more about how you did it.
I am using “RGBDemo”, which is based on the “Nestk” library, to make the virtual background video* that you saw. RGBDemo is a toolkit that integrates the OpenNI/OpenKinect framework with OpenCV, the Point Cloud Library, and Qt. If you want to know more about it, please visit http://labs.manctl.com/rgbdemo/
In my video* I used depth filtering with two trackbars, one for minimum depth and one for maximum depth, which I can adjust on the fly. So I haven't used any background subtraction as such. If you want, you can use OpenCV's adaptive thresholding function on the depth image to separate background from foreground. Here is my source code: https://www.dropbox.com/s/4tgrjdagcz3pgtu/perfectly_Overlayed_image.cpp
The background image used : https://www.dropbox.com/s/tmkojd84ezd3fk7/sunset2.jpg
If you haven't used RGBDemo previously and get stuck anywhere, I would be glad to help you out.
*Video referred to in this post: http://www.youtube.com/watch?v=rNc6pIK2h2M
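The two-trackbar filter described above is just a depth band-pass: keep a pixel's camera color where its depth lies between the adjustable minimum and maximum, and substitute the background image elsewhere. Here is a minimal, stdlib-only sketch of that core step (the real demo works on Kinect depth frames through OpenCV inside RGBDemo; the function and parameter names here are illustrative, not from the linked source):

```cpp
#include <cstdint>
#include <vector>

// Keep the camera pixel where depth is inside [minDepth, maxDepth]
// (e.g. millimetres); otherwise substitute the background pixel.
// This is the core of the two-trackbar depth filter.
std::vector<uint32_t> depthBandFilter(const std::vector<uint16_t>& depth,
                                      const std::vector<uint32_t>& camera,
                                      const std::vector<uint32_t>& background,
                                      uint16_t minDepth, uint16_t maxDepth) {
    std::vector<uint32_t> out(camera.size());
    for (std::size_t i = 0; i < camera.size(); ++i) {
        bool inBand = depth[i] >= minDepth && depth[i] <= maxDepth;
        out[i] = inBand ? camera[i] : background[i];
    }
    return out;
}
```

In the actual pipeline, the two trackbar callbacks would simply update `minDepth` and `maxDepth`, and this filter runs once per frame.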
Ramsri, I need to ask you some questions. I'm confused about Viola-Jones; I'm doing it as my year project and am a bit confused. Kindly direct me at email@example.com
Sir, I want MATLAB code for finding the intima-media thickness. Please help me. My email id: firstname.lastname@example.org
Thanks very much for the reply. I haven't compiled RGBDemo yet, but I will be using this link:
for help. I do have most of the libs and drivers, though, although the process seems quite lengthy.
My further question to you, after seeing the code, would be: is it possible to put a video as the background, and would that delay the scene too much?
Again, many thanks
You can use OpenCV and keep a video in place of the static background. That would work perfectly. Yeah, follow the tutorial by Razor Vision and you should be able to compile it. I shall be glad to help you with whatever I can in case you need any help.
This could be of good help to you.
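Swapping the static background for a video, as suggested above, only means advancing the background clip by one frame per iteration and compositing the live foreground through the same mask. A stdlib-only sketch of that loop logic (in practice you would decode frames with OpenCV's `cv::VideoCapture`; the names here are invented for illustration):

```cpp
#include <cstdint>
#include <vector>

// Wherever the mask marks foreground (non-zero), take the live camera pixel;
// elsewhere take the current frame of the background video.
std::vector<uint32_t> compositeFrame(const std::vector<uint8_t>& fgMask,
                                     const std::vector<uint32_t>& camera,
                                     const std::vector<uint32_t>& videoBg) {
    std::vector<uint32_t> out(camera.size());
    for (std::size_t i = 0; i < camera.size(); ++i)
        out[i] = fgMask[i] ? camera[i] : videoBg[i];
    return out;
}

// Cycle through the decoded background clip, wrapping to frame 0 at the end
// so the clip loops for as long as the live stream runs.
struct VideoBackground {
    std::vector<std::vector<uint32_t>> frames;
    std::size_t next = 0;

    const std::vector<uint32_t>& nextFrame() {
        const std::vector<uint32_t>& f = frames[next];
        next = (next + 1) % frames.size();
        return f;
    }
};
```

Since this adds only one frame decode and one per-pixel merge per iteration, the extra delay the questioner worries about should be small.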
It was very helpful.
Could you please be so kind as to email me directly?
I have some questions related to my own code.
I have emailed you. Let me know if there is any problem.
This is Chris. I've watched most of your YouTube videos about the Kinect, PCL, and RGBDemo, which are just awesome. In particular, two projects, "Kinect segmented and colored pointcloud real time" and "PCL object and Hand Tracking", provided me with great inspiration.
As I said on your YouTube channel, I am studying and experimenting with the Kinect, OpenNI, PCL, and RGBDemo. I find these two demos really brilliant, and I think they would be really helpful to me during my learning process. Hence, could you please share the source code and exe files of these two fantastic demos with me? I would really appreciate it!!! Cheers!!
Hey Chris,
Thank you! I would be happy to share the source code with you. I can send it to email@example.com if you want it sent to that address; otherwise, drop me a mail at firstname.lastname@example.org. I shall send it over this weekend. Feel free to remind me just in case I forget. By the way, have you configured RGBDemo?
Thank you for your reply. Of course, you can send the source code to my email address.
And I think I will need to ask you some questions about how to compile it afterwards.
Yes, I've configured RGBDemo and PCL successfully, and tested a few samples recently.
If you have any questions, I'd be happy to share my configuration experience with you.
I am a postgraduate student in the UK, and I am happy to discuss questions with people.
By the way, your longboard is cool.
Thanks again 🙂. If you have already configured RGBDemo, it shouldn't be much of a problem to compile my source files. I would be happy to help you out with whatever I know. I've already shared the code files with you on Gmail. Go through them and we can discuss compilation. Good to know you!
I've got the files, and I will go through them as soon as possible, so we can discuss them afterwards. It's great to know you too, and thank you so much for sharing. Talk to you later.
How are you doing? I've compiled the code successfully. However, there were several problems during the debugging process. How should I set up the position of the Kinect camera? Should the Kinect be placed on a tripod? Because when I held the Kinect in my hand and varied its position, the debug run failed.
Hi Chris, I am doing well. Good to know that you could compile it.
I had the Kinect top-mounted on a table; it was fixed there. Also, I believe I put in some x, y, z thresholds for filtering, and some of the other processing depends on my setup, so you may need to change them according to yours. Debug in steps: 1) the plane is extracted and removed; 2) clustering works correctly to cluster the objects after removing the plane; 3) the clustered objects are colored.
Also check whether the other demos in RGBDemo are working fine or not, because sometimes, although I could compile the RGBDemo source code, it happened that the demos didn't run because of an incompatible 32/64-bit mix of Visual Studio, OpenNI, etc.
Hope that helps!
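The debugging steps above mirror a common PCL pipeline: crop the cloud with x/y/z passthrough thresholds, remove the dominant plane (in PCL, typically `SACSegmentation` with a planar model), then Euclidean-cluster what remains. As a conceptual, stdlib-only sketch of the passthrough and clustering stages (not the actual RGBDemo/PCL code; PCL's `EuclideanClusterExtraction` does the same thing with a kd-tree instead of this naive O(n²) search):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { float x, y, z; };

// Step 0: the x/y/z "passthrough" thresholds that depend on the physical
// setup -- points outside the box are dropped before any further processing.
std::vector<Pt> passThrough(const std::vector<Pt>& cloud, Pt minB, Pt maxB) {
    std::vector<Pt> out;
    for (const Pt& p : cloud)
        if (p.x >= minB.x && p.x <= maxB.x &&
            p.y >= minB.y && p.y <= maxB.y &&
            p.z >= minB.z && p.z <= maxB.z)
            out.push_back(p);
    return out;
}

// Naive Euclidean clustering: grow a cluster from each unvisited point by
// repeatedly absorbing neighbours closer than `tolerance`. Returns one
// cluster label per point; each label can then be mapped to a color.
std::vector<int> euclideanCluster(const std::vector<Pt>& pts, float tolerance) {
    std::vector<int> label(pts.size(), -1);
    int nextLabel = 0;
    auto close = [&](const Pt& a, const Pt& b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz) <= tolerance;
    };
    for (std::size_t seed = 0; seed < pts.size(); ++seed) {
        if (label[seed] != -1) continue;
        std::vector<std::size_t> queue = {seed};
        label[seed] = nextLabel;
        while (!queue.empty()) {
            std::size_t cur = queue.back();
            queue.pop_back();
            for (std::size_t j = 0; j < pts.size(); ++j)
                if (label[j] == -1 && close(pts[cur], pts[j])) {
                    label[j] = nextLabel;
                    queue.push_back(j);
                }
        }
        ++nextLabel;
    }
    return label;
}
```

Debugging in the suggested order then means: check the passthrough box against your physical setup first, then check that the labels split the scene into sensible clusters, and only then look at the coloring.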
Most of the RGBDemo demos are working well on my computer; all of the software used to compile them is 32-bit.
This one (clustering_realtime_fast_and_efficient_Feb29.cpp) is the source code, right?
Let me tell you my compile steps, because I am not sure if they are valid.
(1) Put the source code in the samples folder of the nestk folder, and change the CMakeLists
(2) Configure and generate the nestk project in CMake
(3) From nestk.sln, build the source code
Are these steps right?
Each time, the demo could be launched; however, there was just the Just_objects window with the coordinate axes, and there wasn't the picture window.
Afterwards it reports: Expression: vector subscript out of range.
Is something wrong with the compile steps?
I really need your help; I've tried several times and it still doesn't work.
I compile the project separately. I don't know if the way you follow (code in samples) will cause any trouble. Specifically, I follow http://nicolas.burrus.name/index.php/Research/KinectUseNestk
Don't worry if you can't follow the above link. I shall share my compilation folder later in the evening and will explain the steps to you.
I am afraid I will need your compilation folder and your explanation as well. I'll wait for you.
I shall share the folder with you in 30 minutes. You may ask any compilation questions once you get it.
Thank you so much! Could you share the other compilation folder (PCL object and Hand Tracking) with me too? I would really appreciate it! Cheers!
Hi Chris, I’ve shared the folder with you. Mail me back on my gmail so that I can explain the compilation.
Could you please explain how I can set up a new project in VS 2010 for using OpenCV (possibly 2.4.3), PCL, and the OpenKinect drivers?
I’ll appreciate any help.
Install OpenCV as a binary, then use the CMake file from this post to compile: https://ramsrigoutham.com/category/technology/pcl-and-opencv/.
If you need more explanation, I can help you out through email or any other medium.
Ramsri, I'm doing my year project based on AdaBoost, but I'm confused: do we calculate the integral image of the training data (the positive and negative images), or do we calculate the integral image of another image given for testing whether it is a face or not? Secondly, what is this threshold, and how do we decide it? Kindly inform me as soon as possible at email@example.com. Thank you.
Hi Saba, I dropped you a mail. You may ask your doubts there.
You haven't sent me any mail... Ramsri, how can I ask my questions now 😦 😦
I was breaking my head for 3 days on installing OpenNI, NITE, and SensorKinect and came across your blog!! And I read your "about me" page! The thing that impressed me the most was your philosophy!!
Anyways – Great going!! 🙂
Thanks Nazeer! 🙂
Hello Ramsri, love your blog!
I watched your video on YouTube about face detection with Viola-Jones. Something is not clear to me; if you could help answer my question, I would be really grateful. I have been looking at haarcascade_frontalface_alt2.xml from OpenCV, and I don't understand the meaning of "root node" and "node 0". Is there something special about the root node? Why are there 2 nodes and not more... or less?
Thanks in advance 🙂
It has been a while, so I don't remember exactly.
But as far as my understanding goes, it enters each stage and finds the integral sum of the given rectangle feature. If the sum is greater than the node's threshold, it chooses the left node; otherwise the right node, and adds that node's value to the stage sum. (This is why you have two nodes every time.)
At the end of each stage, the total accumulated sum is compared with the stage threshold. If it exceeds that stage's threshold, the window moves on to the next stage in the cascade; otherwise it is rejected as not a face.
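The stage-and-node logic described above can be sketched with a toy cascade. This is a hedged illustration of the control flow only: the real OpenCV detector derives each feature value from the integral image of the window, and the struct and field names here are invented, not OpenCV's:

```cpp
#include <cstddef>
#include <vector>

// One weak classifier: if the feature value is below `threshold`, contribute
// leftVal to the stage sum, otherwise rightVal (the two "nodes" in the XML).
struct WeakClassifier {
    float threshold, leftVal, rightVal;
};

struct Stage {
    std::vector<WeakClassifier> weak;
    float stageThreshold;
};

// Evaluate a window through the cascade: accumulate the chosen node values
// per stage, and reject as "not a face" the first time the accumulated sum
// falls below the stage threshold. `featureValues[s][i]` stands in for the
// Haar feature responses that would come from the integral image.
bool passesCascade(const std::vector<Stage>& cascade,
                   const std::vector<std::vector<float>>& featureValues) {
    for (std::size_t s = 0; s < cascade.size(); ++s) {
        float sum = 0.0f;
        for (std::size_t i = 0; i < cascade[s].weak.size(); ++i) {
            const WeakClassifier& w = cascade[s].weak[i];
            sum += (featureValues[s][i] < w.threshold) ? w.leftVal : w.rightVal;
        }
        if (sum < cascade[s].stageThreshold)
            return false;  // rejected by this stage
    }
    return true;  // survived every stage: accepted as a face
}
```

The early-reject structure is the whole point of the cascade: most windows fail an early, cheap stage, so the expensive later stages run on very few candidates.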
Hi Ramsri, your explanation of the Viola-Jones method for face detection helped me a lot. Do you have any material to share on facial expression recognition in MATLAB? Thanks in advance.
Sorry, I don't happen to have any material on facial expression recognition.
No problem. Thanks anyway 🙂
I am working on my term project, which should be able to do 3D scanning with a Kinect. I found your simple app involving the Kinect, PCL, and OpenCV. How can I save the Kinect's x, y, z points of an object to a PLY file?
Can you help me with this part?
I am working on my project for capturing a reverse-sweep video of a large RLC circuit diagram in OpenCV, where the video has to be converted into a mosaic by stitching keyframes and representing them as a single image.
I am trying to build a panorama desktop application in Java. First of all, I converted your C++ code from "Panorama – Image Stitching in OpenCV" to Java successfully. However, there are seams in the stitched image. Could you help me with this, please? I can send you the image if you wonder how it looks.
Thanks in advance. I’m looking forward to hearing from you soon.
I'm a student at Istanbul Technical University. It was a real pleasure to read about your work on your personal webpage. I have a request for you. I have been working on a project right now, trying to do something you have already done, and I need your help regarding the source code. I'm attaching the YouTube video about the project here.
Here is the project regarding PCL and the Kinect:
With kind regards and thanks
Hi Abdullah,
I should have the source code for the videos linked in the YouTube descriptions themselves. Please check them.
I want to congratulate you on the Viola-Jones detection and tracking video. Nice work!
Could you share the slides as a PDF or PPT?
I should have the PPT slides for the videos linked in the YouTube descriptions themselves. Please check them.
I have watched your project video on YouTube called "Kinect segmented and colored pointcloud real time". I am working on PCL segmentation for my project. I liked your project very much; you are very successful at this. Which PCL version did you build it with? Can you send me the CMakeLists and all the source code? How can I run them? What are the system requirements (PCL 1.6, etc.)? I am working with MSVC 2010 x64 on Win7.
e-mail adress : firstname.lastname@example.org
Please help me,
Great job on all the projects you've done. I'm a master's student doing research on robot grasping. I'm currently trying to find the touch point between a finger and an object. The project you built, "PCL object and Hand Tracking", is really awesome. Could you please share your source code? If not, could you please let me know if you used the Kinect skeleton to detect the palm and then 2D operations to get the contour of the fingers?
my email is email@example.com
Thank you for your time in advance.
Hello, Goutham... I listened to your explanation of Viola-Jones face detection and tracking on YouTube. Can you please email me the PPT as soon as possible? My email id is firstname.lastname@example.org
Hello Mr. Ramsri,
I have seen your video explaining Viola-Jones face detection, but I still do not understand which features we use in each stage. If you could help me, I would be grateful.
Hi... I want to capture video using two webcams so I can figure out the x, y, z axes. I am trying to work on a live video stream... Could you please help?