
gigapixel imageBot

by ordaos, published Jun 12, 2011


Description

This thing is a set of tools to help adapt a makerbot or other 3-axis CNC machine into a gigapixel imageBot. This is an extension of the Explorable Microscopy project. http://www.explorablemicroscopy.org/

Some example images: 720 megapixel Arduino http://gigapan.org/gigapans/77742/ 200 megapixel Crab shell http://gigapan.org/gigapans/78874/ 60 megapixel fly http://gigapan.org/gigapans/79492/

The goal is to make the generation of very very large, explorable images, as simple as possible, bringing this technique to more people. An important aspect of this goal is to make as few modifications to the equipment as possible.

The idea here is to capture thousands of images of a single subject in a 3D array, then merge these images into a single explorable image which can be viewed online using the free gigapixel image sharing site, http://www.gigapan.org . The process is broken up into four steps:

• Image capture - automatic capture of thousands of images in a 3D array.
• Focus stacking - merging images of different focus into a single all-in-focus image.
• Image stitching - merging adjacent images in an xy plane into a single large image.
• Upload/sharing - uploading to the gigapan website for sharing and exploring.

This thing specifically addresses the first of these four steps, image capture. A video of the thing in progress can be found here: http://www.youtube.com/watch?v=jKSs6J-xn18

This thing was greatly influenced by Rich Gibson, who provided the initial impetus to work on it, so much so that he bought a makerbot that he then let me play with.



Instructions

Set up your imageBot:

1) Mount a camera to your CNC machine. For small working spaces, a camera with appropriate magnification for the subject is important. I'm using a DSLR, a Canon T2i with the stock lens and a ~$10 reversing ring to gain magnification. A proper macro lens may work better, but for ~$10 I'm getting great results. Newer point-and-shoots may be able to achieve a reasonable level of magnification in macro mode for many subjects. Unfortunately, I haven't made a general-purpose camera mount; mine was made from a bit of t-slot extrusion and a few screws (the camera's mounting socket takes a 1/4"-20 screw). Each camera will be a little different, so be creative.

2) Make and connect an electronic trigger. On DSLRs this can be done through the electronic shutter release; an example of how to wire the shutter for some DSLRs can be found here (http://martybugs.net/blog/blog.cgi/gear/CanonN3Connector.html). I simply short the innermost and outermost pins on the stereo plug using a relay. For a cheaper solution, you can use a CHDK-enabled Canon point-and-shoot camera and trigger it over USB; see here (http://chdk.wikia.com/wiki/USB_Remote_Cable) for directions. The electronic trigger then needs to be connected to your hardware. In the case of the makerbot, I've got it triggering through the fan port on the extruder board. The python code that generates the G-code for image capture inserts the take-picture command, which in this case is fan on/fan off (M106/M107), although these can be changed via the command-line options.
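As a rough sketch, the G-code block that gets inserted at each capture point might look like the following. This is a hedged illustration based on the description above, not grid.py's actual output: the M106/M107 pairing and the G4 dwell before and after the shot are assumptions, so check a generated file to confirm.

```python
def trigger_gcode(delay_ms=1500, on_cmd="M106", off_cmd="M107"):
    """Build the G-code snippet that fires the camera relay at one
    grid position. G4 P<ms> is a dwell (pause) in milliseconds."""
    return "\n".join([
        f"G4 P{delay_ms}",   # settle delay: let vibration die down
        on_cmd,              # fan on -> relay closes -> shutter fires
        off_cmd,             # fan off -> relay opens again
        f"G4 P{delay_ms}",   # wait out the exposure before moving
    ])

print(trigger_gcode())
```

With the defaults this pauses 1.5 seconds on each side of the shutter pulse, which is where a per-picture cost of roughly 3 seconds comes from.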

3) Add lights. Pretty straightforward, good pictures require good lighting. Be creative. I wired up a couple of high power LEDs to the makerbot power supply.

Taking pictures:

1) Determine the field-of-view and depth-of-field. To figure out how many images you need to cover a subject, you must first determine the field-of-view and depth-of-field of the lens you're using. The field-of-view is easy: either place a ruler under the camera and count, or, using the manual controls, advance the stage in the x direction until an object passes from one side of the image to the limit of the other side. The depth-of-field is a little trickier to determine precisely. You can estimate it by again using the control panel in ReplicatorG to advance the z stage, noting when a plane comes into focus and when it falls out of focus. A slightly more rigorous method is to measure it directly using a ruler at 45 degrees; the measured width in focus is then equal to the depth.
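Once you have the field-of-view and depth-of-field, the size of the image array follows directly. Here is a small sketch of that arithmetic; the overlap fractions are illustrative assumptions (stitchers generally want some xy overlap to match features), not grid.py's actual defaults.

```python
import math

def grid_counts(x_mm, y_mm, z_mm, fov_x_mm, fov_y_mm, dof_mm,
                xy_overlap=0.3, z_overlap=0.0):
    """Estimate how many images cover a subject volume.
    Adjacent xy frames overlap by a fraction xy_overlap so the
    stitcher can match features; z steps may overlap as well."""
    step_x = fov_x_mm * (1 - xy_overlap)   # effective stride per frame
    step_y = fov_y_mm * (1 - xy_overlap)
    step_z = dof_mm * (1 - z_overlap)
    nx = math.ceil(x_mm / step_x)
    ny = math.ceil(y_mm / step_y)
    nz = math.ceil(z_mm / step_z)          # images per focus stack
    return nx, ny, nz, nx * ny * nz

# e.g. a 40 x 13 x 3 mm subject with a hypothetical 2 x 1.5 mm
# field-of-view and 0.15 mm depth-of-field:
print(grid_counts(40, 13, 3, 2.0, 1.5, 0.15))
```

The point of the exercise: a modest subject volume can easily demand thousands of frames, which is why the capture step has to be automated.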

2) Generate your array using grid.py. For all the details on using grid.py, type "./grid.py --help". You must first determine the bounds of your subject; this is easily done by using the ReplicatorG control panel to jog in x, y, and z. Set home to the bottom, lower-left corner and work your way to the top, upper-right corner (all units should be positive). Once found, these values are used to generate the array. A typical example is:

./grid.py --x=40 --y=13 --z=3 --depth-of-field=0.15 --z-overlap=0 --picture-delay=1500 > crab03.gcode

This sets our x to 40mm wide, y to 13mm high, and z to 3mm deep. The depth-of-field for this lens was found to be around 0.15mm. There is no z-overlap, the picture delay is 1.5 seconds (applied twice per shot, so 3 seconds per picture), and the output is saved to crab03.gcode.
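Before hitting build, it's worth estimating how long a run will take. A rough back-of-the-envelope helper (the 1-second stage-move time is an assumed placeholder; measure your own machine):

```python
def capture_hours(n_images, picture_delay_ms=1500, move_s=1.0):
    """Rough runtime estimate: each picture costs two dwells
    (before and after the shutter) plus stage travel time."""
    per_shot_s = 2 * picture_delay_ms / 1000 + move_s
    return n_images * per_shot_s / 3600

# e.g. 2000 images at ~4 s each is a bit over 2 hours:
print(f"{capture_hours(2000):.1f} h")
```

This is why trimming stacks and keeping the delay as short as your camera allows matters for big captures.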

3) Start image capture. Home your bot to the 0,0,0 you chose when defining the subject space. Open the G-code produced by grid.py and hit "Build".

Post image capture, stacking, stitching and uploading:

1) To aid in image processing it's convenient to group each focus stack into a single folder. The python code zsort.py can help with this.

zsort.py sorts sequentially shot and numbered images into subdirectories for batch focus stacking.

Usage: ./zsort.py PREFIX n1 n2 n3

where PREFIX is the image name prefix (like "IMG" for Canon cameras), n1 is the number of images per stack, n2 is the total number of images, and n3 is the trim factor. Call zsort.py from the folder containing the image set to be sorted. If the trim factor is zero, it copies all the images for each stack. If it's an integer larger than zero, it selects the n3 largest files in each stack and copies those over. This is a dirty way of speeding up focus stacking for compressed images, where the file size relates directly to the amount of "information" in the image: larger files tend to be more in focus than smaller ones. This only works well for relatively flat subjects.
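A minimal sketch of what a zsort-style pass does internally, assuming Canon-style sequential names like IMG_0001.jpg (the real script's naming, numbering, and options may differ):

```python
import os
import shutil

def zsort(prefix, per_stack, total, trim=0, ext=".jpg"):
    """Copy sequentially numbered images into one subdirectory per
    focus stack. If trim > 0, keep only the trim largest files of
    each stack (bigger JPEGs tend to be more in focus)."""
    names = [f"{prefix}_{i:04d}{ext}" for i in range(1, total + 1)]
    for s, start in enumerate(range(0, total, per_stack)):
        stack = [n for n in names[start:start + per_stack]
                 if os.path.exists(n)]
        if trim > 0:
            # crude focus heuristic: sort by file size, keep largest
            stack = sorted(stack, key=os.path.getsize,
                           reverse=True)[:trim]
        dest = f"stack_{s:03d}"
        os.makedirs(dest, exist_ok=True)
        for n in stack:
            shutil.copy(n, dest)
```

For example, `zsort("IMG", 20, 2000, trim=5)` would split 2000 frames into 100 stack folders and keep only the five largest files from each, cutting the stacking workload by 75%.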

2) Focus stacking. There are a few focus stacking software packages available, including some free and open-source versions, particularly those written for ImageJ. For these tests, however, I've used Zerene Stacker (http://www.zerenesystems.com/) because of its good quality and ease of batch processing.

3) Image stitching. Again, there are many options for image mosaic stitching, including commercial tools like PTGui and the free, open-source Hugin, although for its ease of use I'm using the GigaPan stitcher.

4) Uploading. gigapan.org offers a completely free service for uploading, viewing, sharing, and annotating gigapixel-scale images. The only requirement is that the image exceed 50 megapixels :)
