LJMU Research 5: Photoscanning a Human Subject

General / 08 January 2021


I've grown quite fond of photogrammetry as a workflow, and I feel that at present I am just scratching the surface of what it can be used for. So far I have only really tackled non-living subjects, which makes my job as a photogrammeter easier; I would like to push this further and tackle a living subject. Luckily, I happen to share a flat with one. Allow me to introduce my life partner, Amie Woodroffe.

Photo of her I took on a photography course

In my preliminary research, I found that human subjects typically scan best in a setup like this:



Image courtesy of https://blog.twindom.com/
The benefit of this setup is that every camera position captures at once, so the model can pose in a more extreme or exaggerated manner without worrying about minor shuffles between shots.

This would be the ideal situation for me, but the blog I found this image on suggests such a rig would cost tens of thousands of dollars, and as I mentioned in my last post, we're doing this thing guerrilla-style. Such luxuries remain in the hands of the real pros. My goal here is really to see how far a man-and-a-camera can take this.

In my last post, I also commented on how various factors may affect the outcome of a photo scan. For the sake of my future self, here are the main conditions that may have affected this shoot:

Camera: Nikon D3500

Time of day: 3:30 pm

Season: Winter

Cloud coverage: Overcast

Other weather conditions: Light drizzle


I predict that the overcast sky may have a positive effect on the outcome. Diffused light seems to work much better as it bakes minimal lighting information into the photographs, with no strong shadows. This is good because it lets you light the model however you please after the fact, without conflicting lighting baked into the textures.

Having completed the shoot, I would note that, for the sake of the model, you should try to avoid weather that makes them uncomfortable. Not only for their own sake: discomfort tends to make people shuffle around, which may ruin the reconstruction. Again, this is where a fancy studio would be better.

Here are the photos used for the photo scan. Out of these, only two turned out to be unusable, which is a good outcome in my book.

Another disadvantage of using the outdoors as a setting is that the software often confuses background points for foreground ones. On the other hand, the background can also be an advantage in reconstructing a scene, as it gives the software more data to cross-reference between shots. I have seen examples of people doing head scans against a greenscreen, so this may be an avenue for exploration.
I also found this article covering methods for a head-only scan, and would like to investigate the techniques it outlines:

https://adamspring.co.uk/single-post/2017/08/30/Single-Camera-Head-Scanning-Photogrammetry
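Coming back to the greenscreen idea: one thing it would allow is masking the background out of each photo before reconstruction, so the software never sees those confusing background points. As a rough sketch (the `green_screen_mask` helper and its threshold are my own assumptions, not part of any photogrammetry package), a simple chroma-key test in Python could flag which pixels are background:

```python
import numpy as np

def green_screen_mask(rgb, dominance=1.3):
    """Return a boolean mask of pixels considered 'background green'.

    rgb: (H, W, 3) array. A pixel counts as background when its green
    channel clearly dominates both red and blue. `dominance` is an
    assumed threshold; a real shoot would need it tuned to the screen.
    """
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (g > r * dominance) & (g > b * dominance)

# Synthetic example: a green frame with a grey "subject" square in the middle.
frame = np.zeros((4, 4, 3))
frame[..., 1] = 200          # pure green everywhere
frame[1:3, 1:3] = 128        # grey subject block (equal R, G, B)
mask = green_screen_mask(frame)  # True on the green border, False on the subject
```

Masked pixels could then be painted flat black (or alpha'd out) before the images go into the reconstruction software, removing the background ambiguity entirely.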


Over in 3DF Zephyr now, this aerial shot is the outcome of the sparse point cloud generation. The software has done a great job of reconstructing the entire scene.
Circled in red is the part we are actually hoping to use, which illustrates a disadvantage of outdoor shooting: the majority of the data produced is background. So I snip the scene down to just the circle containing Ms. Woodroffe.


With some snipping, this is the result. The data we are left with is quite a small percentage of the overall scene.


3DF Zephyr has reconstruction presets that I will explore; here we can see there is a preset for the human body.

In my experimentation, I found that the best approach with these presets is to start small and generate over multiple passes to make sure the best result is produced.

In my attempts, I came across a few failures, which are often quite funny, however frustrating.

The above reconstruction did a great job on the face, but her body absorbed the floor texture and the reconstruction produced two right arms.

The opposite is true for this one: the body came out great, but the head has collapsed. Here she is again from a different angle.

This is certainly a game of trial and error. There are definitely some notable issues with the photographs I used, as I seem to have missed a vital angle between her left profile and her left 3/4 view; this sort of gap gives the software a hard time stitching the views together. I also think there is a natural shift in a person's stance that is unavoidable; even trees move in the wind, so minor movements of the body are bound to happen. I believe post-adjustments will be necessary for almost all uses of this method on a living subject.
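A missing angle like that could in principle be caught before leaving the location. As a hypothetical sketch (the headings would have to be jotted down by hand during the shoot; `largest_gap_deg` is my own bookkeeping helper, not a 3DF Zephyr feature), a few lines of Python can flag the biggest hole in coverage around the subject:

```python
def largest_gap_deg(angles):
    """Largest angular gap (degrees) between consecutive shot positions
    on a full 360-degree orbit around the subject.

    angles: approximate headings, in degrees, at which photos were taken.
    """
    a = sorted(x % 360 for x in angles)
    gaps = [nxt - cur for cur, nxt in zip(a, a[1:])]
    gaps.append(360 - a[-1] + a[0])  # wrap-around gap from last shot to first
    return max(gaps)

# Shots roughly every 20 degrees, but the left profile-to-3/4 range was missed.
shots = list(range(0, 241, 20)) + [300, 320, 340]
print(largest_gap_deg(shots))  # prints 60, flagging the hole before packing up
```

If the largest gap comes back much bigger than the spacing you were aiming for, you know to grab a few more frames on the spot rather than discovering the hole in the software later.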



After many attempts, I came to this:

In this version there are still many issues with the mesh. The aforementioned missing link between her profile and 3/4 view resulted in some caved-in geometry around the left cheek.

The texture produced was also very ghostly and didn't hold detail very well.

Similar caving-in was found at the back of the model, and a notable ripple existed throughout the mesh.


In my previous studies of this technique, this would have been the end of the process. It felt as though I was defeated. "This is why the pros have those fancy studios," I thought to myself.


And then I remembered that Blender has a sculpt feature.

"Maybe we should start manually addressing some of the more glaring issues and it will look a little better," I thought.

So, bouncing between Blender's sculpting tools and a piece of software called Substance Painter, I gradually fixed the issues and artefacts that the computer hadn't caught.




Blender's sculpt mode allows you to fix issues with the geometry using a more intuitive sculptor's kit. This works much better than traditional polygon manipulation, as the generated topology is pretty chaotic.



Substance Painter is a great tool for PBR texturing. For those not aware, PBR texturing is a method used in game engines, among other things, to output real-time renders. It's a set of textures that describe things like surface roughness and whether a thing is made of metal or skin. This allows for relatively processor-cheap refinement of the model, and the material information helps sell the overall look.
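To make the metal/skin distinction concrete: under the common metallic-roughness convention used by most real-time engines (the `specular_f0` helper below is just my own illustration of it, not Substance Painter's internals), a single metallic value decides where a surface's base reflectivity comes from:

```python
def specular_f0(base_color, metallic):
    """Base reflectivity (F0) under the metallic-roughness PBR convention.

    Dielectrics (skin, fabric) reflect roughly 4% of light regardless of
    their colour, while metals take their reflectance directly from the
    base colour. `metallic` blends between the two behaviours.
    """
    dielectric_f0 = 0.04
    return tuple(dielectric_f0 * (1.0 - metallic) + c * metallic
                 for c in base_color)

skin = specular_f0((0.8, 0.6, 0.5), metallic=0.0)   # -> (0.04, 0.04, 0.04)
gold = specular_f0((1.0, 0.77, 0.34), metallic=1.0) # -> the base colour itself
```

This is why painting the necklaces and buttons as metallic changes how they catch the light, without needing any extra geometry.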

I took the time to paint her necklaces and buttons with a metallic texture. I also used a normal map to add different sculptural textures to the clothing, such as a grain on the fur that sells the illusion much better.

Another useful tool within Substance Painter is the Projection tool, which I used to project photographic data from the shoot onto the base colour layer. 

This was most useful in keeping the likeness consistent. Fixing the model with Blender's sculpt tool can often ruin the likeness, so I had both programs running and bounced between them. The following shows the final output: the first is a Blender Cycles render, the second is a real-time interactive view courtesy of Sketchfab.


On reflection, it's hard to say how much of the final result is truly and purely photogrammetry at this point; however, it makes a great foundation for studies such as these.

I would approach photogrammetry more loosely in the future; it's more about the final product than how things look starting out. There are messed-up examples among my failures that, upon reflection, may have been fine starting points had I known how much manipulation can be done after the fact.

My final note is on how I have been thinking about this in terms of practicality. I've questioned the how quite in-depth, and I think some thought must be put into the why.
A definite avenue of exploration is this Smithsonian collection a coursemate found and shared in the group chat:
https://3d.si.edu/cc0?utm_source=siedu&utm_medium=referral&utm_campaign=oa

I wasn't sure whether the models were photogrammetry or not. I dug into the blogs of the Smithsonian Digitization Program Office and found this post confirming the use of photogrammetry in the scanning of a Bell X-1 aeroplane:

https://dpo.si.edu/blog/bell-x-1-3d

"We used laser scanners for geometry capture and Photogrammetry to capture the color information of the Bell X-1. With Photogrammetry, we are able to turn our digital cameras into 3D scanners using post-processing software. The laser scanners capture over 1 million data points per second. The data we collected on the Bell X-1 is accurate to about one millimeter." - Vincent Rossi

Courtesy of https://dpo.si.edu/blog/

This proves to me that the techniques I have been exploring have legs in the process of archiving museum collections, and the idea of combining this with laser scanning is something I'd really like to explore. 

This practitioner also details some of the issues the team overcame later in the post:

"Certain materials on the Bell X-1 did not scan well and presented challenges for the scanning team. We had difficulty with two types of surfaces on the aircraft: the glass windshield and the painted blue areas around the stars on the wings. Glass does not scan well because the laser mostly passes right through it. Luckily, we were able to get enough points on the glass surface to accurately reconstruct the windshield using CAD (Computer-Aided Design) software." - Vincent Rossi

This second quote reminds me of a lot of the problems I overcame in my own tests, and how oftentimes the resulting mesh must be fixed manually. It shows that no matter how fancy your setup is, it's always going to need some work afterwards.

References:
https://sketchfab.com/

https://www.substance3d.com/

https://adamspring.co.uk/single-post/2017/08/30/Single-Camera-Head-Scanning-Photogrammetry

https://blog.twindom.com/blog/overview-dslr-photogrammetry-3d-full-body-scanner

https://3d.si.edu/cc0?utm_source=siedu&utm_medium=referral&utm_campaign=oa