
Talk:Main Page


Documentation

Hi team,

This is good stuff; I'm really looking forward to it taking off. I've got a couple of thoughts I've been rolling around in my mind since this Kinect sensor was first introduced (as Project Natal):

1. Second Life, The Sims, or any 3D virtual-reality use.
2. Robotics: controlling a humanoid-style robot, similar to ASIMO, with the sensor mimicking the human's actions.

I am not a developer but do have my hands in IT and really enjoy technical documentation. Is there any help required in the documentation arena?

Cheers,

Jay


Hey guys,

I have an interesting survey related to 3D camera technology and software development. It would be great if you could fill it in; it only takes 3 minutes. http://edu.surveygizmo.com/s3/507735/Create-your-own-apps-and-make-money

Thanks


Hi all!

Please, please note somewhere at the top of the main page that people with an Xbox One Kinect should go to the libfreenect2 repository. I have been banging my head against that wall for some time...

Cheers, Thomas

Interesting survey

Hi guys,

I've purchased a new Xbox 360 Slim (!) bundle with the MS Kinect. This new Kinect has a so-called AUX port and no USB port anymore.

Do you know of any way to connect the new Kinect (AUX) to a PC? Is there an adapter available?

Thanks for your feedback!

KR

Read Getting Started --JoshB 20:30, 16 December 2010 (CET)

Those who bought the Kinect in the slim bundle will need the special adapter sold separately by MS to connect it to a PC, but it costs $50, they don't ship to all countries, and it is sold out anyway... Here on this forum, a few guys made it at home for a few bucks: http://forums.xbox-scene.com/index.php?showtopic=723561&st=15

Great Job

Great job on the wiki, guys! The new logo looks great!

Angelo Castigliola http://www.castigliola.com/

Idea...

Has anyone experimented yet with using multiple Kinects simultaneously for a combined volume capture?

Might have some useful applications for film / visual effects. If you had multiple cameras and the means to calibrate them so they're aligned to the same volume, you could fill in most of the empty spots. Could be great for creating crowd simulations, for example.
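For the merging step itself, here is a minimal Python/numpy sketch, assuming each Kinect's depth output has already been converted to an N x 3 point cloud in metres and that a calibration step (e.g. a checkerboard visible to both sensors) has produced a 4x4 camera-to-world matrix per device. All names and numbers below are illustrative, not part of libfreenect:

 import numpy as np
 
 def merge_point_clouds(clouds, extrinsics):
     """Transform each camera's N x 3 point cloud into a shared world
     frame using its 4x4 camera-to-world matrix, then concatenate."""
     merged = []
     for points, cam_to_world in zip(clouds, extrinsics):
         # Promote to homogeneous coordinates, apply the rigid transform.
         homo = np.hstack([points, np.ones((points.shape[0], 1))])
         merged.append((homo @ cam_to_world.T)[:, :3])
     return np.vstack(merged)
 
 # Toy setup: camera 2 is rotated 90 degrees about Y and shifted 2 m
 # along X, so the two views cover each other's occluded regions.
 c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
 cam1 = np.eye(4)
 cam2 = np.array([[ c, 0., s, 2.],
                  [ 0., 1., 0., 0.],
                  [-s, 0., c, 0.],
                  [ 0., 0., 0., 1.]])
 cloud1 = np.random.rand(1000, 3)  # stand-ins for real depth-derived points
 cloud2 = np.random.rand(1000, 3)
 print(merge_point_clouds([cloud1, cloud2], [cam1, cam2]).shape)  # (2000, 3)

Getting the extrinsics is the hard part, and note that overlapping structured-light Kinects can interfere with each other's projected IR patterns, so calibration is not the only challenge.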

Possible answer? Not sure if this is what you are looking for: http://kinect-hacks.net/kinect-hacks/2-kinects-1-box

PROBLEM: Kinect for Windows on OS X

I'm kind of new to coding, so I followed your installation instructions step by step. I chose to use Homebrew to install the library, but I still can't get it to work with glview. Could you provide a step-by-step how-to for getting the Kinect for Windows to work, including the main requirements? I've searched for everything about Kinect for Windows on OS X, but the result is an overload of information, which leaves me confused and lost. Now I don't know what I should do or where to start. (Sorry if my question sounds stupid, but I really want to make this work with your help.) Any suggestions would be appreciated.

Thank you.

I'm sonic, and I've met this kind of problem too. I have two Kinects: the first is a Kinect for Windows (Model 1517), the second a Kinect for Xbox 360 (Model 1414). The Model 1414 works well under OpenKinect, but glview with the Model 1517 responds "do not find device". My OS: Ubuntu 12.04 LTS.
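If it helps to narrow this down: you can check whether libfreenect enumerates the device at all, independently of glview, using the Python wrapper that ships in libfreenect's wrappers/ directory. A sketch, not a guaranteed fix (function names as in current versions of that wrapper):

 # Sanity check: does libfreenect see the Kinect at all?
 import freenect
 
 ctx = freenect.init()
 print("devices found:", freenect.num_devices(ctx))
 
 # Try to grab one depth frame; sync_get_depth() returns None on failure.
 result = freenect.sync_get_depth()
 if result is None:
     print("no depth frame -- check USB port, permissions, model support")
 else:
     depth, timestamp = result
     print("depth frame:", depth.shape)

If the Model 1517 shows up as zero devices while the 1414 works, one likely cause is that the installed libfreenect predates Kinect-for-Windows support (the Ubuntu 12.04-era packages are old); building the current source from the git repository is worth trying.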

Gesture recognition in libfreenect2

Hi everybody,

I feel the need to congratulate everybody who contributed to this project, as it is truly amazing! A few hours after getting my hands on a Kinect 1520, I was able to create a simple paint application in Processing 3 by slightly modifying the DepthPointCloud2 example.

In order to track only my hand so that I can draw its movements, I set lower and upper bound thresholds on the depth camera values to create a very narrow band of values that are 'tracked' and 'painted' into a second image. However, this is a very simplistic approach.
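Expressed in Python/numpy for brevity, that depth-band trick boils down to a few lines. The band limits and the 512x424 Kinect v2 frame below are illustrative values, not anything libfreenect2 or the Processing example prescribes:

 import numpy as np
 
 NEAR, FAR = 600, 700  # made-up band limits in millimetres; tune per setup
 
 def hand_mask(depth):
     """Boolean mask of pixels inside the narrow depth band.
     `depth` is a 2-D array of distances in mm (0 = no reading)."""
     return (depth > NEAR) & (depth < FAR)
 
 def hand_centroid(depth):
     """Centroid (x, y) of the banded pixels, or None if the band is
     empty -- the single point to 'paint' into the second image."""
     ys, xs = np.nonzero(hand_mask(depth))
     if xs.size == 0:
         return None
     return xs.mean(), ys.mean()
 
 # Toy frame: flat 1.5 m background with a 'hand' patch at 65 cm.
 depth = np.full((424, 512), 1500, dtype=np.uint16)
 depth[200:240, 100:140] = 650
 print(hand_centroid(depth))  # about (119.5, 219.5)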

From what I have read, libfreenect2 does not support gesture recognition, so my questions are the following:

- How would one approach the problem of writing code to support gesture recognition?
- More specifically, is it possible to do this in Processing, or should it be done at a lower level that just feeds its results into Processing? And what exactly would that lower level be?

I have some experience with ML algorithms, but I have only applied them to 'real-world' data (such as transactional data) stored in .csv or .txt files, with the programs written in R, SAS, Matlab, etc. So I have no idea how to go about it for real-time sensor data.
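One common pattern for exactly this gap (a suggestion, not anything libfreenect2 or Processing provides out of the box): reduce each frame to a small feature vector, such as the hand centroid above, keep a sliding window of the last N frames, and classify the flattened window exactly like a row of tabular data. A hedged scikit-learn sketch; WINDOW, the 'swipe'/'circle' labels, and on_frame() are all made-up names:

 import numpy as np
 from collections import deque
 from sklearn.neighbors import KNeighborsClassifier
 
 WINDOW = 30  # frames per gesture (~1 s at 30 fps); an assumption
 
 # Training data: windows of hand positions recorded and labelled by
 # hand ('swipe', 'circle', ...). Random stand-ins shown here.
 X_train = np.random.rand(40, WINDOW * 2)
 y_train = ['swipe'] * 20 + ['circle'] * 20
 clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
 
 buf = deque(maxlen=WINDOW)  # ring buffer of recent per-frame features
 
 def on_frame(centroid):
     """Feed one frame's (x, y) hand centroid; returns a predicted
     gesture label once the window is full, else None."""
     buf.append(np.asarray(centroid, dtype=float))
     if len(buf) < WINDOW:
         return None
     return clf.predict(np.concatenate(buf).reshape(1, -1))[0]
 
 # Example: a fake left-to-right swipe, one centroid per frame.
 for x in np.linspace(100, 400, WINDOW):
     label = on_frame((x, 212.0))
 print("predicted:", label)

In this structure the 'lower level' question mostly dissolves: the feature extraction can live wherever the frames arrive (Processing included), and the classifier is ordinary offline ML applied per window.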

I am running this on OS X with a Kinect v2, so using OpenNI or Microsoft's SDK is not (yet) possible; besides, the point of this is to learn how to do it rather than to get a result. I understand that it might be too ambitious, but I do know people with AI and CS backgrounds who could help; I just need to understand where such algorithms would be implemented. Again, the point is more to understand the nature of these problems and how one would go about solving them than the actual result.

I hope all this makes sense.

Thanks