

Mount Rushmore from Internet Images & Free Software

by PrintableScience, published Aug 2, 2015


Challenge Winner



Thing Statistics

7239 Views · 1860 Downloads · Found in Scans & Replicas

Summary

About 15 years ago, a company called MetaCreations Corp released a product called Canoma. By today's standards its functionality was pretty limited, but at the time it blew me away. It allowed you to input a photograph and then, with a fairly straightforward interface, bend and rotate the image in 3D.

When I saw the See the World contest, I knew I wanted to respond to the challenge by using existing software to try to create a 3D object of a well-known sculpture, building or landmark. After a few false starts (most notably the Statue of Liberty), I lucked onto Mount Rushmore. For such a famous landmark, I’m surprised there are no 3D models of it to speak of.

While researching the current availability of software, I came across VisualSFM, which seemed to be exactly what I was looking for, as it claimed it could create a point cloud from non-sequential images. Most programs that perform this very important first step in 3D modeling (creating a point cloud) want or expect the images to have been taken at pretty much the same time, and often with the same camera.

So I spent a few leisurely hours downloading images of Mount Rushmore (well more than just a few, and not that leisurely), and then spent more time than I care to admit learning all about how to prepare my image set for processing by VisualSFM and the other programs in the tool chain.

I haven’t personally been to Mount Rushmore, but I’ve spent so much time poring over pictures that I think I could reasonably expect to get a job as a guide. I think I know every nook and cranny the monument has.

Now that the world has an STL file of Mount Rushmore, I’m sure it’ll be no time at all before all sorts of variations start popping up with one or more of the presidents’ faces replaced by someone else’s image… say Stephen Colbert’s, for example?

While googling about the net for everything and anything Mount Rushmore, I came across the following quote from a MakerBot blog post in 2011 (http://www.makerbot.com/blog/tag/meshlab):
“Using My3DScanner Tony uploaded 30 pictures from his camera phone to create the above gnome clone. Awesome! Who is going to be the first person to create a 3D image of Mount Rushmore using this system?”
Well, I didn’t use the service mentioned in the blog, but I think I can claim to be the first person to create a 3D image of Mount Rushmore using only images from the net.

In addition to the STL file mountRushmore.stl, I’ve also included the following files:

  1. mountRushmoreDense.nvm
    This is the sparse reconstruction file created by VisualSFM that you load into MeshLab as a project.

  2. mountRushmoreDense.0.ply
    This is the dense reconstruction file that you would import into MeshLab.

Loading files 1 and 2 into MeshLab will give you the opportunity to play around with the point cloud and create your own meshes without the time and frustration of VisualSFM.

  3. mountRushmore4Blender.ply
    This is the mesh I created with MeshLab and cleaned up to the best of my ability for importing into Blender for final processing.

  4. mountRushmore.blend
    This is a Blender file, with the simple objects I assembled and placed appropriately before the final construction, i.e. mesh union, intersection and difference.

I would have liked to include the complete image set, but at over two and a half gigabytes, I’m sure that wouldn’t make me very popular at Thingiverse.

It was my original intention to provide a video and accurate step-by-step instructions on how to achieve the same results as I present in the attached STL file, but unfortunately I have to go out of town and won’t be back until after the See the World challenge closing date. So it will have to wait. Stay tuned; I’ll get back to it when I return.

Instructions

It’s easy to download pictures of buildings or monuments from the net and convert them into STL files that you can then use to print objects on your 3D printer. With easily obtained software, you can do it for free on your Mac, Windows or Linux machine… as long as you’ve got a LOT of time on your hands.

It’s a five-step process:

  1. Assemble an image set
  2. Create a point set for each image
  3. Compute a point cloud from all the point sets
  4. Create a mesh
  5. Clean up, edit and modify the mesh

There’s a free program called VisualSFM that will create a point set for each of your images and create a point cloud from your image set. You can then use the open source program MeshLab to create the mesh from the point cloud produced by VisualSFM. Finally, you use the open source program Blender to modify your mesh and create an STL file suitable for 3D printing.
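The five steps map onto the three tools roughly like this. Here’s a minimal dry-run sketch that just assembles the command lines without executing them; the exact flags for VisualSFM, meshlabserver and Blender are assumptions drawn from each tool’s command-line documentation, so verify them against the versions you actually install:

```python
# Sketch of the tool chain as shell commands, assembled but NOT executed.
# Flag names are illustrative assumptions, not verified against any version.

def pipeline_commands(image_dir, work="mountRushmore"):
    """Return a (hypothetical) command line for each stage of the pipeline."""
    return [
        # Steps 2-3: point sets, matching, sparse + dense reconstruction
        ["VisualSFM", "sfm+pmvs", image_dir, work + ".nvm"],
        # Step 4: mesh the dense point cloud with a MeshLab filter script
        ["meshlabserver", "-i", work + ".0.ply",
         "-o", work + "4Blender.ply", "-s", "poisson.mlx"],
        # Step 5: final clean-up and boolean operations in Blender, headless
        ["blender", "--background", "--python", "export_stl.py"],
    ]

for cmd in pipeline_commands("images/"):
    print(" ".join(cmd))
```

In practice I drove everything through each program’s GUI, but sketching the stages this way makes the data flow (images → .nvm → .ply → .stl) explicit.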

I used Windows for the process, but all of the necessary tools and programs are available for the Mac and Linux as well. The Mac is my machine of choice, and I used it for downloading my images and running MeshLab and Blender.

To start you’ll want to install the following software:

VisualSFM http://ccwu.me/vsfm/
VisualSFM is the program responsible for creating the image point sets and the point cloud. It outputs two different point clouds, which it refers to as the Sparse and Dense reconstructions. Although all we require is the dense reconstruction, the sparse reconstruction is generated automatically along the way.
If you are using Windows, you’ll need to select either the CUDA or non-CUDA version: you want the CUDA version if you have an NVIDIA graphics processor, and the non-CUDA version if you don’t.
VisualSFM creates its own sparse reconstruction but requires two other programs to create the dense reconstruction. You’ll see from the installation guide that you have to download two other programs (CMVS and PMVS) in order to enable VisualSFM to create dense reconstructions. The VisualSFM site provides links to the program files for these two extra programs, so getting up and running is straightforward.

MeshLab http://meshlab.sourceforge.net
MeshLab is the open source program that we use to take the dense reconstruction point cloud produced by VisualSFM and turn it into our mesh. Although the process is quite straightforward, MeshLab’s interface makes the awkward VisualSFM GUI seem well engineered and elegant in comparison.

Blender https://www.blender.org
Blender is another open source program; we use it to edit the mesh created by MeshLab. It has an even steeper learning curve than MeshLab, but if you persevere you’ll have your reward.

Once you’ve installed all the software you need, it’s time to assemble your image set, and there’s no easier place to look than the net. To find images of Mount Rushmore I just typed it into Google, selected the “Images” button and got more images of Mount Rushmore than I originally thought possible. However, you should use some discretion in what you download. Here are a few tips:

  1. Choose an object that has lots of contrast. For example, the Sahara Desert would be a poor choice.

  2. Choose a popular object. The more pictures taken from different perspectives, the better. Try to watch out for duplicates, but don’t obsess over them. Far better to have a duplicate or two than to discard the incredible value of two pictures taken from even slightly different vantage points.

  3. Remember the limitations of your printer. For example, the Statue of Liberty sounds like a good idea, but all those spikes in her crown are overhang accidents waiting to happen.

  4. If your object is popular, like Mount Rushmore or the Statue of Liberty, there are going to be a lot of photoshopped variations. For instance, there are thousands of Mount Rushmore pictures on the net that have one or more of the presidents’ heads swapped out for someone else’s. Sorry, only the original will do.

  5. Avoid copies. Mount Rushmore suffers from a number of well-intentioned copies that people have constructed to honor this famous monument, and they can sometimes fool you when you’re looking for images of your object. For a 3D reconstruction, though, only the real thing will do.

  6. Look for as many different perspectives as possible. For example, while there are thousands of pictures of the Statue of Liberty taken by people below the statue, there’s really only a handful of photos taken from above. That will result in a final model that has great detail of her chin, but not so great detail of the top or back of her head. It’s often helpful to change your search terms in order to obtain a different set of images from which to choose. For example, “mount rushmore aerial” will give you some great pictures taken from airplanes.

Once you’ve assembled your image set, you need to sort through it, making sure the images are compatible with the next step in the process. Here are the steps to follow:

  1. Discard anything under 15K in size. These images usually don’t have enough detail to improve your final model.
  2. Convert any image that’s not in JPEG format to JPEG. VisualSFM only works with JPEG photographs. Fortunately every OS has a graphics program that can convert images easily.
  3. Resize any image that has a dimension greater than 3200 pixels. While the VisualSFM config file can be changed to accept higher resolutions, the processing hit is so severe that your image set may never finish. Literally.
  4. DO NOT CROP any image. Many of the pictures you download also contain technical metadata about the photograph, including the effective focal length of the camera when the picture was taken. Generating an accurate point cloud uses the focal length to determine the distance of the camera from the subject. Cropping a photo confuses the point cloud generator and makes including the cropped photograph pointless.
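The triage in steps 1 and 2 can be sketched with just the standard library. This is a minimal helper that sorts a folder into “keep” and “reject” piles by file size and extension; actual format conversion, resizing and the 3200-pixel dimension check (rule 3) would need an image library such as Pillow or your graphics program of choice, so they are only noted in comments:

```python
# Sort an image folder into keep/reject piles per the rules above.
# Conversion, resizing and dimension checks are left to an image editor.
import os

MIN_BYTES = 15 * 1024          # rule 1: discard anything under 15K
JPEG_EXTS = {".jpg", ".jpeg"}  # rule 2: VisualSFM only reads JPEGs

def triage(folder):
    keep, reject = [], []
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        if not os.path.isfile(path):
            continue
        ext = os.path.splitext(name)[1].lower()
        if os.path.getsize(path) < MIN_BYTES or ext not in JPEG_EXTS:
            reject.append(name)  # too small, or needs converting to JPEG
        else:
            keep.append(name)    # still check dimensions (rule 3) and
                                 # never crop (rule 4) before feeding VisualSFM
    return keep, reject
```

Running `triage("images/")` on your download folder gives you a quick list of which files need attention before VisualSFM ever sees them.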

Once you’ve edited your image set and put the files all in one folder, you can fire up VisualSFM and load in the photos. For Mount Rushmore this worked out to 1,024 images, which took about 90 minutes to process. But wait, there’s more. After you’ve loaded in your image set, it’s time to compute the matches between the images in your set, and the processing time grows quadratically, because every image has to be compared with every other image: n(n-1)/2 comparisons for n images. Do the math. :) If you’re only processing 10 images that’s 45 comparisons; 102 images is 5,151 comparisons; and the 1,024 images in my Mount Rushmore image set is 523,776 comparisons.
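Since matching works on pairs of images, the comparison count for n images is n(n-1)/2, which you can check for yourself:

```python
# Pairwise matching: each of the n images is compared with every other
# image exactly once, giving n * (n - 1) / 2 comparisons.

def comparisons(n):
    return n * (n - 1) // 2

for n in (10, 102, 1024):
    print(n, "images ->", comparisons(n), "comparisons")
```

Quadratic growth is why trimming undersized or duplicate images before loading them pays off so handsomely in processing time.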

I don’t have the world’s speediest computer, but it takes about 36 hours for VisualSFM to process the 1,024 images in my current image set. After you’ve computed point matching between images, you’re still not finished: you also need to compute the sparse and dense reconstructions. While they don’t take quite as much time as point matching, you’re going to be waiting a few hours between each of these steps.

Once VisualSFM has computed your dense reconstruction it will save it to a file, which you can then load into MeshLab. It’s relatively easy to create the mesh, but once again it takes a fair amount of time; for my Mount Rushmore point cloud, it usually takes about 50 minutes to an hour.

I do a little bit of editing with the very primitive tools of MeshLab and then save the mesh to a .ply file, which I then load into Blender to do a bit more editing of the model before saving the final .stl file.


Comments

Great model. Thank you for sharing. Our remix is done and it will be made soon: https://skfb.ly/XvCq

What a wonderful model! I did something in parallel but I did the sculptor's model in the studio at the base of Mount Rushmore. Like you, I relied on images culled from the Internet. Mine didn't turn out nearly so well since the majority of photos are usually taken from the same angles. The far edges, top and back are lacking but a viewer can get a sense of the three-dimensional nature of Borglum's creation. Thanks for sharing your model.

No, that's not my model in 123D… I wasn't interested in 123D because of the limited number of photos it processes and the limited control you have over the final mesh. I'm still just experimenting with the tool chain I use, but it appears to offer more flexibility, options and resolution than 123D.

Great model. I couldn't help but notice it looks exactly like a model from the 123D gallery. Is that one yours too?
