What the Tech…?!
A recent SHA Academic and Professional Training Student Sub-committee survey asked student members what technologies…
Virginia Commonwealth University (VCU) was awarded Department of Defense (DoD) Legacy funding for a three-dimensional (3D) artifact scanning project in 2011, which was developed in partnership with John Haynes, then archaeologist for Marine Corps Base Quantico (MCBQ). The DoD Legacy program is designed to foster innovative approaches to the study, preservation, and stewardship of cultural remains—including archaeological objects—recovered on DoD facilities across the globe.
Our project involves 3D scanning of archaeological objects using a NextEngine Desktop 3D scanner to test and demonstrate the capabilities of this technology for its potential employment in ensuring DoD compliance with historic preservation laws. Archaeological collections from DoD installations in Virginia, Maryland, and other regional repositories are the subject of the study. The Virtual Curation Unit for Recording Archaeological Materials Systematically (V.C.U.-R.A.M.S.) consists of faculty member Dr. Bernard K. Means and several undergraduate students enrolled at VCU.
Virtual artifact curation has the potential for addressing a number of issues important to archaeologists. One issue is access to collections. The virtual curation project will enable researchers to access digital data files that allow full 3D observation and manipulation of an image and accurate measurement without requiring scholars to travel to a repository. Digital scanning of objects can save time for both researchers and for staff at curation facilities, while maximizing scholars’ access to collections. Objects and entire collections that are now physically dispersed in more than one repository can be united through 3D digital scanning into a single virtual repository.
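As a minimal sketch of what "accurate measurement without requiring scholars to travel" can look like in practice (the file format here is an assumption on my part; scans are commonly exported to the plain-text Wavefront OBJ format), the overall dimensions of a scanned object can be read straight from the vertex list of the digital file:

```python
# Sketch: measure the bounding-box extents of a scanned artifact
# exported as a Wavefront OBJ file (vertex lines start with "v x y z").

def obj_extents(obj_text):
    """Return (dx, dy, dz) bounding-box dimensions of an OBJ mesh."""
    verts = []
    for line in obj_text.splitlines():
        parts = line.split()
        if parts and parts[0] == "v":  # vertex record
            verts.append(tuple(float(c) for c in parts[1:4]))
    if not verts:
        raise ValueError("no vertices found")
    mins = [min(v[i] for v in verts) for i in range(3)]
    maxs = [max(v[i] for v in verts) for i in range(3)]
    return tuple(maxs[i] - mins[i] for i in range(3))

# Tiny hypothetical example: vertices spanning a 40 x 12 x 4 mm object.
sample = """
v 0 0 0
v 40 0 0
v 40 12 0
v 0 12 4
"""
print(obj_extents(sample))  # → (40.0, 12.0, 4.0)
```

A researcher anywhere could run the same measurement on the same file, with no handling of the physical object at all.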
The NextEngine Desktop 3D scanner is designed to be portable and, as part of the Virtual Artifact Curation project, the potential and capabilities of the scanner have been tested at several non-lab locations. We can go to places that are culturally and historically important to our country, scan objects at these locations, and make them accessible to a wider audience. We have been fortunate to scan archaeological materials from Virginia institutions such as Colonial Williamsburg, Jamestown Rediscovery, George Washington’s Ferry Farm, and Flowerdew Hundred, and at The State Museum of Pennsylvania in Harrisburg. Archaeological materials from these significant locations are often too fragile to be passed around among scholars and in classroom settings, but they can be shared digitally.
With 3D scanning technology, important cultural items that belong to and must be returned to private landowners could be recorded and made available to scholars through virtual curation. While owners of archaeological collections in private hands may not be willing to donate the physical objects located on their properties—perhaps identified through a compliance investigation—they may agree to “donate” the information inherent in their collections and make their items virtually accessible to a wider audience of scholars and others who might be interested. Virtual curation may also prove useful for cultural objects that are designated for eventual repatriation, if descendant groups agree to the scanning of these items.
Virtual curation of artifacts will prove critical for fragile objects by minimizing handling and “preserving” them digitally, especially when conservation funding is limited. Repeated digital scanning sessions can help conservators ascertain whether conservation treatments are working as intended—through highly accurate digital models taken of the same object at set intervals. This will enable the conservator to closely monitor whether there is continuing degradation of an object.
While digital scanning is an important tool for documenting the potential degradation of an object, the initial scan should precede any conservation treatment when possible. A pretreatment scan may be the “truest” image of the object that we will ever have, since conservation does not always produce an object, however stable, that represents its original state.
Sharing of data is certainly one of the strong points of the movement toward digital archaeological media. The ability to manipulate and move objects in three dimensions benefits researchers far more than static images ever can. Public and scholarly interaction with digital models can foster a more reflexive archaeology, allowing diverse observers to move virtual objects or travel through virtual worlds, creating a dialectical relationship between past and present and opening interpretation and reflection to a wider audience.
Where do we go from here? How will 3D digital images of objects and artifacts alter people’s perceptions of what is “real” and what is “virtual”? This is something we plan to explore in greater detail in the coming months. Our project team maintains our own blog that regularly details and updates our progress with the scanning project: http://vcuarchaeology3d.wordpress.com. Here, you can find more information about our successes and challenges with the virtual curation of artifacts from historic and prehistoric sites. We welcome your comments as well.
This is a fantastic project – thank you for sharing! There are also cheaper (but less rigorous, I suppose) means of 3D scanning. I’ve got students in one of my classes working with 123D Catch from Autodesk http://www.123dapp.com/catch on artefacts from the Canadian Museum of Civilization – see http://www.youtube.com/watch?v=eCS21yHz7Qg&feature=youtu.be for one such model. The student used a $400 digital camera and around 70 photographs to produce that object (which can be exported into Sketchup or Meshlab, etc., and has correct volumes and so on). For public archaeology, this is a really nice way of going about things – one could involve the public itself in the creation of models. We’ll be presenting about all this at the Canadian Archaeology Conference in May, in Montreal.
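The “correct volumes” claim is straightforward to check from the exported mesh itself. As a minimal sketch (the tetrahedron below is a made-up stand-in for a real artefact mesh), the enclosed volume of a closed, consistently wound triangulated mesh can be computed with the signed-tetrahedron (divergence theorem) method:

```python
# Sketch: enclosed volume of a closed, triangulated mesh via signed
# tetrahedra summed against the origin (divergence theorem).

def mesh_volume(verts, faces):
    """verts: list of (x, y, z); faces: 0-based vertex-index triples
    with consistent outward winding. Returns the enclosed volume."""
    total = 0.0
    for i, j, k in faces:
        ax, ay, az = verts[i]
        bx, by, bz = verts[j]
        cx, cy, cz = verts[k]
        # signed volume of the tetrahedron (origin, a, b, c)
        total += (ax * (by * cz - bz * cy)
                  - ay * (bx * cz - bz * cx)
                  + az * (bx * cy - by * cx)) / 6.0
    return abs(total)

# Hypothetical test shape: a unit right tetrahedron (true volume 1/6).
tet_verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tet_faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
print(round(mesh_volume(tet_verts, tet_faces), 4))  # → 0.1667
```

Run against a photogrammetry export, this gives an independent check that the model’s scale and closure are sound before anyone trusts measurements taken from it.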
Hi Shawn. Yes, that definitely works. The difference with our technology is that we can create digital topological models that can be measured even via an Adobe Acrobat PDF file. Also, we can “print” our models; we are eagerly awaiting a resin printer that will allow us to do this. One drawback of our approach, though not of yours, is that our scanner does not do well with replicating the colors of an object. Thanks for the links!
Hi! I’ve managed to embed some of our 123D Catch models into PDF, but my brain starts to hurt as I work through LaTeX manuals… ugh. As for 3D printing – the business model for 123D Catch appears to be ‘use this freebie to create your 3D models, and then pay for them to be printed out via our bespoke 3D printing service’, http://www.123dapp.com/makeit/about . But there doesn’t seem to be any reason why the .obj files created by Catch can’t be brought into a standard 3D printer like a Makerbot. Someday, I hope to be doing this, and to be able to see just what level of fidelity is actually possible this way. I’ve seen people do interesting things with Microsoft Kinects in this regard, too.
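For anyone else wrestling with those LaTeX manuals, the embedding itself can be fairly compact. This is only a sketch, assuming the mesh has already been converted to U3D (e.g. with MeshLab; the file names `artifact.u3d` and `artifact.vws` are placeholders), using the `media9` package:

```latex
% Sketch: embedding an interactive 3D model in a PDF with media9.
% Requires the model in U3D (or PRC) format and a reader that
% supports embedded 3D, such as Adobe Reader.
\documentclass{article}
\usepackage{media9}
\begin{document}
\includemedia[
  width=0.8\linewidth, height=0.6\linewidth,
  activate=pageopen,   % start the 3D view when the page opens
  3Dtoolbar, 3Dmenu,
  3Dviews=artifact.vws % optional file of preset camera views
]{}{artifact.u3d}
\end{document}
```

The resulting PDF lets a reader rotate and zoom the model with no software beyond the PDF viewer itself.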
Shawn, that’s interesting and good to know. We are getting a Makerbot and could potentially test using one of your .obj files to see how that works. Should be here in the next month or so. (I think they use a Makerbot to make other Makerbots, which is a bit recursive from my perspective 🙂 ….) I’d be interested in knowing how long it takes to create one model. It would also be interesting to try both approaches on the same object and compare resolution – the NextEngine scanner is not without its issues!
Very interesting stuff, Bernard. So what do you think the future of this technology is? With museums and curation spaces filling up, and it becoming more costly to curate artifacts, do you see virtual scanning as a possible solution to indefinite curation, or do you think its strength lies in providing researchers with data and the public with information that they would not normally have access to?
P.S. Nice blog, that pipe is awesome
Hi Jonathan! Right now the immediate future does not seem to be that this is a substitute for curation, as it simply takes too long to process an individual artifact. The strength really is in sharing data and providing the public and researchers with access to specific artifacts. I think this technology will also be useful for creating virtual type collections. Certainly the few animated images we’ve posted so far have generated a fair amount of interest. This technology is also great for pedagogical reasons--much easier to show aspects of an object to students than trying to pass around artifacts!
So is there anyone out there who has started to make a virtual type collection or assembled a database where you can view scans? Is that somewhat of the end game with the DoD? It’s interesting to think about the type of research that could be done utilizing a large database of scanned images (such as pipe morphology across a region).
Needless to say, I love what you’re doing.
Jonathan
Hi Jonathan,
We are beginning to assemble a virtual type collection as part of our project and are looking to do a follow-up that would allow us to expand. We still have to figure out how we are going to host things, and we also have permissions to work out – except for VCU materials!
Cheers,
Bernard
Bernard, Great project, especially around virtually curating objects in private hands. We’re working here on the other end of that challenge – making vast CRM collections accessible – and have just started a repository for CRM-generated materials from the province of Ontario. This summer we will be running an archaeology digital animation unit to develop the protocols to, in effect, assembly-line generate 3D scans of diagnostic materials. Accessibility is the key aim, for archaeologists, Descendant communities, and the public, and to facilitate remote research on that compiled record. We have been fortunate to have received a large grant and will be populating the lab with several higher-end color scanners as well as more portable ones (the RFP for these is being written now, so we’ll see what the successful equipment will be!).
Something we’ve been talking about is post-scan rendering, which can be a lot of work. Did you clean up your object scans in the NextEngine-supplied software, or did you export to another software program? Also, have you had any luck scanning sharp edges? Especially for lithic materials, it can be hard to capture enough points on a thin edge to have something that is analyzable virtually.
Cheers,
Neal
Neal,
Sounds like a great project and I’d like to talk with you more about it. Are you doing the SAAs? We will be producing a report on our findings--the good, the bad, and the ugly--which will be publicly accessible and might help with some of your issues. I’d love to reference your project in the report as well, so if you have any details available now and can share, feel free to shoot me an email at bkmeans@vcu.edu.
We do the initial cleanup in the NextEngine software, but I am looking into getting Photoshop’s 3D tools to do more cleanup. Our budget is a bit on the modest side right now.
As for projectile points: to get a successful scan, especially of thin edges, we powder coat the edges with “White Graphite” powder (boron nitride), which helps the scanner pick up the edges. One of the issues we have, however, is that aligning scans, which you need to do if you want a digital topological model that captures the edges, base, and tip of a point, can blur the edges; the NextEngine software is a bit “indelicate” on this issue. The solution we have come up with is to keep the “edge” and “base/tip” scans in separate files for information purposes, along with a separate file with these attributes merged. The latter is necessary to create models that can be manipulated in 3D.
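The merged-file step is conceptually simple once the partial scans are aligned. This is only a sketch of the bookkeeping involved, not the NextEngine workflow itself: when two already-aligned meshes are combined, the second mesh’s face indices must be offset past the first mesh’s vertices (the tiny triangle meshes below are hypothetical stand-ins for real edge and base/tip scans).

```python
# Sketch: concatenate two already-aligned partial scans (e.g. an "edge"
# scan and a "base/tip" scan) into one mesh. Real merging software also
# fuses overlapping surfaces; this only combines the geometry.

def concat_meshes(m1, m2):
    """Each mesh is (verts, faces): verts are (x, y, z) tuples, faces
    are 1-based vertex-index tuples as in the OBJ file format."""
    v1, f1 = m1
    v2, f2 = m2
    offset = len(v1)  # shift second mesh's indices past first mesh
    merged_faces = list(f1) + [tuple(i + offset for i in f) for f in f2]
    return v1 + v2, merged_faces

edge_scan = ([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(1, 2, 3)])
base_scan = ([(0, 0, 1), (1, 0, 1), (0, 1, 1)], [(1, 2, 3)])
verts, faces = concat_meshes(edge_scan, base_scan)
print(len(verts), faces[-1])  # → 6 (4, 5, 6)
```

Keeping the unmerged scans alongside the merged file, as described above, preserves the sharpest record of each feature even when the merge blurs the edges.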
Hope this helps and good luck on your project.
Bernard