
LJMU Research 4: A more in-depth look at photogrammetry

General / 06 January 2021





I briefly touched on photogrammetry in a previous post. I find this method particularly interesting as it feels like the possibilities are endless. The whole world is basically an open-source asset library; in theory, anything can be digitized and used however you please. The method involves photographing a subject from many angles, running the photos through some software, and getting a fully realized 3D model out the other side.


My first contact with this technique was through a now-defunct app called display.land. Essentially, this service let you create a photogrammetry asset from point clouds generated from video, meaning you didn't have to take several photos and you received live feedback on which points had been captured successfully.

Display.land

With this service (I call it a service because the actual processing happened on a server, not within the app), I managed to create a very low-quality 3D model of myself. The following media shows the results of these tests.


https://twitter.com/GhostBrush_/status/1276619387420782599

https://twitter.com/GhostBrush_/status/1276586259864002560

I previously touched on the statue base asset I created using photogrammetry (below).



It came to my attention that I didn't really document the workflow behind that asset, so I set out to create a new one.

It can be hard to pick a good subject: some locations greet you with odd looks from strangers wondering why on earth you are walking in circles photographing the same thing, while others are simply inaccessible for the kind of shots you need.

Fortunately, the streets are much less populated due to ahem...
...Current world events...
...ahem...
...So the odd looks are not so much of a problem.

Recently, one of my walks around the city of Liverpool blessed me with this cheeky and completely harmless piece of petty vandalism (below).
Before I get in trouble, I must stress that whoever this rogue statue masker is, it isn't me.


Now, in my research, I couldn't find much about Henry Cotton, the man in the statue. I know he was the first Chancellor of the university I attend, and it's hard to say how he would have felt about this. I like to think a man of education would support the use of face masks in our current time, so all I can do is hope he would approve.

Here is a picture of him without a mask, courtesy of https://artuk.org/.
To capture this statue in 3D, the first thing you need is a camera and the patience to walk in concentric circles for a bit.

Above shows roughly what you're aiming for. Many of these images turned out unusable; such is the struggle of a photogrammetrist. Once you have a nice array of images from all different angles, it's time to plug them into your software of choice. I'm using 3DF Zephyr for this; however, you can use whatever you like.



First things first, you need to generate a Sparse Point Cloud. This is just a loose scattering of anchor points: the software finds similarities between the photographs and pins them for further detail extraction later.
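3DF Zephyr doesn't expose this step, but the matching idea behind it can be sketched in a few lines: compare feature descriptors between two photos and keep only the unambiguous matches. The toy Python below uses invented two-number descriptors standing in for real ones, and Lowe's ratio test, a standard trick in feature matching; it's illustrative only, not how Zephyr actually works internally.

```python
# Toy sketch of the feature-matching step behind a sparse point cloud.
# Real pipelines use detectors like SIFT; here the descriptors are
# hand-made number lists standing in for real ones.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_features(desc_a, desc_b, ratio=0.75):
    """Match each descriptor in photo A to its nearest neighbour in photo B,
    keeping only matches that pass Lowe's ratio test (the best match must be
    clearly better than the second-best, otherwise it is too ambiguous)."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((euclidean(da, db), j) for j, db in enumerate(desc_b))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:
            matches.append((i, best[1]))  # a pinned anchor: same point in both photos
    return matches

# Two "photos" with made-up descriptors; features 0 and 1 clearly correspond,
# feature 2's best match is ambiguous and gets rejected by the ratio test.
photo_a = [[0.0, 1.0], [5.0, 5.0], [9.0, 2.0]]
photo_b = [[0.1, 1.1], [5.1, 4.9], [4.0, 4.0]]
print(match_features(photo_a, photo_b))  # [(0, 0), (1, 1)]
```

Each surviving match is a point seen in two photos; triangulating many such matches across many photo pairs is what scatters those first anchor points into 3D space.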


This is actually a much cleaner version than what initially generated; much of the surrounding architecture was captured too, so I gave it the snip and moved on. Achieving the next stage is actually one button press and a lot of waiting while looking at stuff like this:

Riveting stuff indeed.

However, this stage is actually performing a couple of steps at once without you needing to watch each one. The first pass is the Dense Point Cloud:

This is basically the process of subdividing the Sparse Cloud and coming to accurate estimations of volume based on the source images. At this point it is still just coloured dots, or voxels; no geometry is present yet. That's where the meshing comes in.
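The "coloured dots" stage can be pictured as binning points into a grid of voxels, each cell averaging the colours of the points that land in it. This is a minimal, purely illustrative sketch; the points, colours, and grid size are invented, and real software estimates depth per pixel and handles millions of points.

```python
# Minimal sketch of densifying a point cloud into voxels: each point is
# snapped to an integer grid cell, and each cell averages the colours of
# the points it contains. All data here is invented for illustration.

from collections import defaultdict

def voxelize(points, colours, voxel_size=1.0):
    """Bin (x, y, z) points into integer voxel coordinates, averaging colour."""
    cells = defaultdict(list)
    for p, c in zip(points, colours):
        key = tuple(int(v // voxel_size) for v in p)
        cells[key].append(c)
    # Average each colour channel across the points in a cell.
    return {k: tuple(sum(ch) / len(cs) for ch in zip(*cs)) for k, cs in cells.items()}

points = [(0.2, 0.3, 0.1), (0.8, 0.9, 0.4), (2.5, 0.1, 0.0)]
colours = [(200, 180, 160), (100, 80, 60), (50, 50, 50)]
voxels = voxelize(points, colours)
print(len(voxels))  # 2 -- the first two points share voxel (0, 0, 0)
```

Meshing then runs over occupied cells like these to stitch actual triangles between neighbouring voxels, which is where geometry finally appears.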

And now we have geometry! Granted, at this point it's geometry only a mother could love, but geometry nevertheless. You may start to notice certain hiccups or errors, but this is not the time to worry about that. What is important is that we have created fairly accurate volumetric data and turned it into data a game engine might be able to understand. The next step makes that a reality; we just need to look at a little more of this...

...while the software creates a fully textured mesh from the initial mesh.

After giving the grass a little trim, I've now got a fully textured mesh ready to fix up in Blender. There's a glaring issue with the cap that needs sorting out; you often find this sort of thing when the subject is taller than you are.

With some cleanup, you have a final model ready to show off to your friends and use however you please! Here is a render and an interactive model viewer of the final result.

If you look at the top of the hat, it is clear that some repair work has been done; however, I think it is important to make these repairs in order to sell the idea of the physical counterpart. I've noticed in both my large-scale photoscan attempts that the top is always the part needing the most work. I suppose a real pro in this field would have some sort of crane or lift to make sure every angle is captured, but I'm doing this guerrilla-style, so I don't mind a little cleanup here and there.

Also, upon uploading this piece, I noticed that I am not the first to use this statue as a photoscan subject. Here are examples of other people's attempts:
https://sketchfab.com/3d-models/final-henry-cotton-statue-high-64964d9e69bf4b8fbbc6043c9dc96d60
https://sketchfab.com/3d-models/henry-cotton-statue-af948b38ae924425945b0418c72036bd
These are both really cool and capture the detail in different ways. I suspect the kind of camera used and the views selected have an effect, as do differences in weather and so on, so there are many avenues to explore in what makes a good scan.

Note to self: look up other attempts before doing my own, rather than after, as this would have shown me what to watch out for and what works well.

I'm going to end this post with a picture of the statue from better days. here's to hoping we'll see him like this again soon.


References

https://artuk.org/discover/artworks/henry-e-cotton-esq-first-chancellor-of-liverpool-john-moores-university-1992-65543
https://en.wikipedia.org/wiki/Henry_Egerton_Cotton
https://sketchfab.com/

https://www.3dflow.net/3df-zephyr-photogrammetry-software/


LJMU Immersive Research 3: AI is stealing my job edition

General / 04 January 2021



Artificial intelligence is something that both fascinates and terrifies me. While AI passing the Turing test is probably still a long way off, the fact that AI is infiltrating our workplaces with alarming frequency is no longer science fiction. In the previous decade, I would probably have met the idea of an AI creating appealing artwork with outrage, anxiety, and apprehension, yet now it seems more real than ever.
If you don't believe me, here is a website dedicated to selling unique artworks generated by AI: https://www.artaigallery.com/

Ever the optimist, I choose to see this as more of a "can't beat 'em, join 'em" kind of situation. In this research, I'll aim to understand how AI can better us as artists rather than replace us entirely.

Awaiting human input, digital painting by myself.



First, I want to be very careful in my definition of AI, as it has become internet shorthand for a lot of things. What I am specifically talking about is anything that uses neural-network machine learning.



I see the ability to learn from past successes and failures in order to overcome a current task as the most valuable foundation of our human intelligence. It's the basis of most other forms of intelligence, whether emotional, academic, or otherwise. To have grown intelligence in a field, at some point you did something well; the resulting endorphins connected the neurons associated with whatever you learned to a positive experience, associating that pathway with a positive outcome. Likewise, at some point you must have messed up. In that case, the neurons associated with the memory are negative, and you feel discouraged from going down that pathway again.


Any neuroscientists in the house may be headbutting their monitors in disgust at this explanation, so bear in mind that I'm an artist, not a scientist. However, this simplified concept does appear to be the core basis of machine learning.

While a computer can't "feel" the positivity or negativity associated with any given act, it can be taught to index pathways as positive or negative. A good outcome may be simply numerical, for example distance travelled in an obstacle course, or something more complex and sinister, like a person's likeness. The most successful solutions are often bred with other successful solutions, sometimes with total mutational randomness thrown in, in order to slowly evolve towards the best possible solution to the given task.
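That breed-the-best loop is essentially a genetic algorithm, and a toy version fits in a few lines. Here the "task" is matching an invented target gene set, so the fitness function, population size, and mutation rate are all arbitrary choices for illustration, not any particular system's real settings.

```python
# Toy genetic algorithm: index solutions as good or bad via a fitness score,
# breed the most successful ones, and occasionally mutate at random.

import random

random.seed(0)  # make the run reproducible

TARGET = [0.7, 0.2, 0.9, 0.4]  # an invented "ideal" gene set to evolve towards

def fitness(genes):
    # Higher is better: negative total distance from the target.
    return -sum(abs(g - t) for g, t in zip(genes, TARGET))

def breed(a, b, mutation_rate=0.1):
    # Crossover: each gene comes from one parent; occasionally mutate one gene.
    child = [random.choice(pair) for pair in zip(a, b)]
    if random.random() < mutation_rate:
        child[random.randrange(len(child))] = random.random()  # total randomness
    return child

population = [[random.random() for _ in range(4)] for _ in range(20)]
history = []
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    history.append(fitness(population[0]))
    survivors = population[:5]  # keep the most successful solutions...
    population = survivors + [breed(random.choice(survivors), random.choice(survivors))
                              for _ in range(15)]  # ...and breed them together

# Because the best survivors are always kept, the top score never gets worse.
best = max(population, key=fitness)
```

Swap the fitness function for "distance travelled by a walking puppet" and you have, in miniature, the shape of the experiments shown below.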



(Above) Google's DeepMind AI learning how to get a puppet to walk without ever seeing a walk cycle.

(Above) Comedian Jordan Peele, in association with BuzzFeed, demonstrating how machine learning might be used to a more sinister end. This is where I believe AI and machine learning pose a more-than-moderate risk to our ideas of identity and privacy, and I believe that in the coming years a lot more regulation will surround so-called "deepfakes".


ARTBREEDER




https://artbreeder.com/ is one of those services that has you yelling "how is this free?" into the aether. Essentially, through machine learning, you can "breed" artwork with other artwork to create new artwork, seemingly turning the process of actually painting a character or landscape into one of unnatural selection. You see, in this case, the process that selects whether an outcome is good or bad is in the control of the user, who can even alter individual "genes" before committing and "breeding" the next generation.
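Under the hood, those "genes" can be pictured as a vector of numbers, so breeding two works is roughly a per-gene blend, with the user's slider tweaks applied on top before committing a generation. The gene names and values below are pure invention to illustrate the idea, not Artbreeder's actual internals.

```python
# Sketch of Artbreeder-style breeding: blend two gene dicts, with the user
# able to pin individual genes via sliders. All names/values are made up.

def breed(parent_a, parent_b, weight=0.5, overrides=None):
    """Blend two gene dicts; `weight` leans towards parent B, and
    `overrides` lets the user pin individual genes directly."""
    child = {k: (1 - weight) * parent_a[k] + weight * parent_b[k] for k in parent_a}
    child.update(overrides or {})  # the user's slider adjustments win
    return child

castle = {"water": 0.8, "towers": 0.6, "gloom": 0.2}
fortress = {"water": 0.2, "towers": 1.0, "gloom": 0.9}

# The user likes the 50/50 mix but drags the "gloom" slider up, then commits.
child = breed(castle, fortress, weight=0.5, overrides={"gloom": 1.0})
print(child)  # {'water': 0.5, 'towers': 0.8, 'gloom': 1.0}
```

Repeat this over many generations, each time picking the child you like best, and you get exactly the ancestor chain shown below: the user is the selection pressure.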

(Above) This cool, foreboding fortress is 100% unique to me, based on minute decisions made over several generations. The images below show this artwork's "ancestors".

This is the earliest ancestor, an artwork I made in Cinema 4D.

This is the AI's interpretation of my piece. A lot of information is lost to the chaos of the process; however, the water and architecture placement remain the same throughout.


This is where the piece became the most chaotic; however, you can see the beginnings of the focal tower.


This is where the piece transforms into its current composition. The final stage was more to do with style.

While it is fair to say that in its current chaotic state it certainly won't compete with the most talented artists out there, I think it's a fantastic companion for quickly generating early concepts or visualizations. There is also a real time-saving element to this and to future versions of it. While it is very difficult to control to the point where you are designing exactly what is in your head, the chaos brings a lot of "happy accidents" to the mix.

What I find quite odd is that the portrait generator is actually much easier to control, perhaps because facial landmarks are far more consistent across the board than geographical ones. I was briefly reminded of https://thispersondoesnotexist.com/, which serves a frighteningly convincing image of a human being with each press of the refresh button.


The following video is an experiment I conducted in Artbreeder involving the same face with the same genes from different angles, and it gives an idea of how well the AI grasps the concept of a likeness. The idea here was that it may help artists understand what their characters need to look like from certain angles.


This character was also generated over many generations of... grandparents, I suppose? With minor changes each time.

It's quite hard not to find this a little unsettling, like breeding the perfect human. There are even sliders for racial percentage, which opens more questions than I care to get into in this post.
Nevertheless, it is an interesting tool I could see being used. In fact, Travis Davids used it for his visualization of real-life South Park characters here:
https://www.artstation.com/artwork/D5zRxn
In his description, he also outlines many other AI techniques used in the project, so it is at least a testament to how AI may help us as artists visualize at a much higher level.
Personally, I feel that a computer's grasp of creativity is chaotic and random, and that our input as curators is what guides it down the path we select. I will conduct more investigations to see how this can be explored further.

References:
https://www.artaigallery.com/
https://www.artstation.com/artwork/D5zRxn
https://youtu.be/gn4nRCC9TwQ
https://artbreeder.com/
https://youtu.be/cQ54GDm1eL0
