Hey! This thing is still a Work in Progress. Files, instructions, and other stuff might change!

Kinect to STL sketch for Processing

by johngomm, published Oct 9, 2011



Here's my Processing sketch to interface with the Kinect, capture the depth data, and render it as a solid STL file. I've included controls to adjust two thresholds, near and far, which lets you set up a "Han Solo in carbonite" type effect.
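The threshold logic itself is simple: anything closer than the near plane or farther than the far plane gets flattened, and only depth inside the band keeps its relief. A minimal sketch of that clamping step (hypothetical names, not the sketch's actual variables):

```java
// Minimal sketch of near/far depth-band clamping (hypothetical names; the
// actual sketch's variables may differ). Samples outside the band are
// flattened to the nearest plane, which produces the flat "carbonite"
// backdrop; samples inside the band keep their relief.
public class DepthBand {
    // Clamp a raw depth sample (e.g. Kinect 11-bit, 0-2047) into [near, far].
    static int clampDepth(int depth, int near, int far) {
        if (depth < near) return near;  // too close: flatten to the near plane
        if (depth > far)  return far;   // too far (or invalid): flatten to the far plane
        return depth;                   // inside the band: keep the relief
    }

    public static void main(String[] args) {
        int near = 600, far = 900;
        System.out.println(clampDepth(500, near, far));  // → 600 (in front of the band)
        System.out.println(clampDepth(750, near, far));  // → 750 (inside the band)
        System.out.println(clampDepth(2047, near, far)); // → 900 (no-reading value)
    }
}
```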

I am not going to hold your hand through setting up Processing, and this write-up is in progress, so if you get frustrated, realize that this might not be for you yet. Until I find a way to streamline posting a standalone application that works (currently it doesn't), this is still only for the persistent.

Until Microsoft publishes the code for their KinectFusion project, this is the best I could do to get a directly printable object without messing around in Blender or MeshLab. It's also my first serious coding effort, so forgive any inelegant code. Yes, the STL files are large (15 MB), and the detail is hard for the CupCake to print, but it has the outreach potential to let people new to 3D printers create a unique, personalized object just by posing. Before you ask: the STL doesn't seem to take any less time in Skeinforge if you use Blender to remove all duplicate vertices first, so I don't bother.
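For the curious, removing duplicate vertices just means mapping each distinct (x, y, z) to a single index and rewriting the triangle list against the smaller vertex table. A rough Java sketch of the idea (not Blender's actual implementation):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of duplicate-vertex removal: map each distinct vertex to one index
// and return a remapping from the old vertex positions to the new table.
public class VertexDedup {
    static int[] dedupIndices(double[][] verts, List<double[]> outVerts) {
        Map<String, Integer> seen = new HashMap<>();
        int[] remap = new int[verts.length];
        for (int i = 0; i < verts.length; i++) {
            String key = verts[i][0] + "," + verts[i][1] + "," + verts[i][2];
            Integer idx = seen.get(key);
            if (idx == null) {
                idx = outVerts.size();      // first time we see this vertex
                seen.put(key, idx);
                outVerts.add(verts[i]);
            }
            remap[i] = idx;                 // old position -> deduplicated index
        }
        return remap;
    }

    public static void main(String[] args) {
        double[][] verts = { {0,0,0}, {1,0,0}, {0,0,0}, {0,1,0} };
        List<double[]> unique = new ArrayList<>();
        int[] remap = dedupIndices(verts, unique);
        System.out.println(unique.size()); // → 3 (one duplicate removed)
        System.out.println(remap[2]);      // → 0 (third vertex maps back to the first)
    }
}
```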


You will need a Kinect attached to your computer.
I haven't yet exported the Processing sketch into a more standalone version, so you'll have to run it from source, but that means you can change and improve it.
This sketch runs inside the Processing environment, which you can download here: http://processing.org/
Then you'll need the libraries my sketch depends on:
ToxicLibs: http://hg.postspectacular.com/toxiclibs/downloads/toxiclibs-complete-0020.zip
PeasyCam: http://mrfeinberg.com/peasycam/peasycam_101.zip
Freenect Library: https://github.com/diwi/dLibs/archives/dLibs

You'll also need to install the OpenKinect drivers to let your computer talk to your Kinect. Choose the right option for your operating system: http://openkinect.org/wiki/Main_Page

Once you have it all set up (yes, I know it's a bit of a chore, sorry), run the sketch and use "r" and "f" to adjust the red (far) threshold and "g" and "b" to adjust the green (near) threshold. When you are happy, strike a pose and press "s" to output the STL. This can take a while depending on your computer's speed, but it shouldn't take longer than 5 minutes at the highest resolution setting on a netbook, and it should be way faster on almost anything else.
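If you want to poke at the output stage, ASCII STL is just a plain-text list of triangles. Here's a minimal illustration of the format (a hypothetical helper, not the sketch's actual output code):

```java
// Minimal illustration of the ASCII STL format: one triangle per
// "facet ... endfacet" block (hypothetical helper, not the sketch's
// actual output code).
public class StlFacet {
    static String facet(double[] a, double[] b, double[] c) {
        // A normal of (0,0,0) is tolerated by most slicers, which
        // recompute normals from the vertex winding order anyway.
        StringBuilder sb = new StringBuilder();
        sb.append("  facet normal 0 0 0\n    outer loop\n");
        for (double[] v : new double[][] { a, b, c }) {
            sb.append("      vertex ").append(v[0]).append(' ')
              .append(v[1]).append(' ').append(v[2]).append('\n');
        }
        sb.append("    endloop\n  endfacet\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        // A complete one-triangle solid: header, one facet, footer.
        String solid = "solid kinect\n"
            + facet(new double[]{0,0,0}, new double[]{1,0,0}, new double[]{0,1,0})
            + "endsolid kinect\n";
        System.out.print(solid);
    }
}
```

A real capture at full resolution emits hundreds of thousands of these facets, which is why the files come out around 15 MB.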


The Linux-x32 package does not work. Processing states:

requires openkinect driver (libfreenect.dll)
tested on windows XP, x86
tested with libfreenect: "OpenKinect-libfreenect-3b0f416"
libusb-win32 version


location: dLibs.freenect.FreenectLibrary.loadLibrary(Unknown Source)
message: Unable to load library : freenect.dll

Not sure why a Linux package would require a Windows DLL, but even after supplying the DLL the error remains unchanged.

Hello, do I need the OpenKinect drivers if I am running Processing for Windows with dLibs? Also, there doesn't seem to be an OpenKinect driver for Windows. What do I do?


I was running this on Windows 7 Starter, so there should be a driver for Windows at the site I pointed to. I tried making a zip that contained all the files needed to install on another machine, but I couldn't get it working (didn't spend too much time on it though). To be honest, I'm not much of a programmer and the whole thing is quite a kludge. I really wish Microsoft would release the code for KinectFusion so we could all get models from that system.


Short question:
Any time I try to run the sketch I get this message:


location:   dLibs.freenect.KinectCore.setDepthBuffer(Unknown Source)
on device:  0
message:    FAILED: set depth buffer
message:    no device opened


location:   dLibs.freenect.KinectCore.startDepth(Unknown Source)
on device:  0
message:    FAILED: start depth
message:    no device opened

Any thoughts on why this happens?

I have everything installed, and I use a mid-2010 MacBook Pro with Lion.

Thanks a lot!

The problem is that the dLibs_freenect library currently only works for Windows. This means that this program is Windows-only for the moment as far as I can tell.

Please contact me at brcjackson at gmail dot com.

Need some help with this.

KinectFusion runs the 3D scan data through a SLAM (simultaneous localisation and mapping) algorithm with some Kalman filtering. The algorithms involved are well known and widely available; they just need to be implemented with OpenKinect.
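To give a flavour of the Kalman part: the one-dimensional update is only a few lines. A toy example (nothing like KinectFusion's actual pipeline, just the basic predict/update step):

```java
// Toy 1-D Kalman filter: fuse noisy measurements of a scalar into a
// running estimate. Illustrates the predict/update cycle only; not
// KinectFusion's actual filter.
public class Kalman1D {
    double x;  // state estimate
    double p;  // estimate variance (our uncertainty about x)

    Kalman1D(double x0, double p0) { x = x0; p = p0; }

    // q: process noise variance, r: measurement noise variance
    void step(double measurement, double q, double r) {
        p += q;                      // predict: uncertainty grows over time
        double k = p / (p + r);      // Kalman gain: trust in the measurement
        x += k * (measurement - x);  // update: pull estimate toward measurement
        p *= (1 - k);                // update: uncertainty shrinks
    }

    public static void main(String[] args) {
        Kalman1D f = new Kalman1D(0.0, 1.0);
        // Noisy readings of a true value of 10.
        for (double z : new double[] { 9.8, 10.2, 10.1, 9.9, 10.0 }) {
            f.step(z, 0.01, 0.5);
        }
        System.out.println(f.x); // estimate has converged near 10
    }
}
```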

In the meantime, check out http://youtube.com/activevision for a group doing MonoSLAM with only a single standard video camera. A thorough Kinect SLAM algorithm could include the video data as well as the 3D map for even more precise scans, as well as importing textures onto the 3D model :)