
LJMU Research 4: A more in-depth look at photogrammetry

General / 06 January 2021





I briefly touched on photogrammetry in a previous post. I find this method particularly interesting as it feels like the possibilities are endless. The whole world is basically an open-source asset library: in theory, anything can be digitized and used however you please. The method involves photographing a subject from many angles, running the photos through some software, and getting a fully realized 3D model out of the other side.


My first contact with this technique was through a now-defunct app called display.land. Essentially, this service allowed you to create a photogrammetry asset through point clouds generated from video, meaning you didn't have to take several photos and you received live feedback on which points had been captured successfully.

Display.land

With this service (I call it a service because the actual processing happened on a server rather than within the app itself), I managed to create a very low-quality 3D model of myself. The following media shows the results of these tests.


https://twitter.com/GhostBrush_/status/1276619387420782599

https://twitter.com/GhostBrush_/status/1276586259864002560

I previously touched on the statue base asset I created using photogrammetry (below).



It came to my attention that I didn't really document the workflow behind that asset, so I set out to create a new one.

It can be hard to pick a good subject: some places greet you with odd looks from strangers wondering why on earth you are walking in circles photographing the same thing, while other locations are simply inaccessible for the kind of shots you need.

Fortunately, the streets are much less populated due to ahem...
...Current world events...
...ahem...
...So the odd looks are not so much of a problem.

Recently, one of my walks around the city of Liverpool blessed me with this cheeky and completely harmless piece of petty vandalism (below).
Before I get in trouble, I must stress that whoever this rogue statue masker is, it isn't me.


Now, in my research I couldn't find much about Henry Cotton, the man in the statue. I know he was the first Chancellor of the university I attend, and it's hard to say how he would have felt about this. I like to think a man of education would support the use of face masks in our current time, so all I can do is hope he would approve.

Here is a picture of him without a mask, courtesy of https://artuk.org/
To capture this statue in 3D, the first things you need are a camera and the patience to walk in concentric circles for a bit.

The above shows roughly what you're aiming for. Many of these images turned out unusable; such is the struggle of the photogrammeter. Once you have a nice array of images from all different angles, it's time to plug them into your software of choice. I'm using 3DF Zephyr for this, but you can use whatever you like.



First things first, you need to generate a Sparse Point Cloud. This is just a loose scattering of shared anchor points: the software finds similarities between the photographs and pins them for further detail extrapolation later. A rough sketch of that matching idea is below.
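As a loose illustration of what's happening under the hood at this stage, here is a minimal sketch of feature matching between two photos using OpenCV's ORB detector. This is not what 3DF Zephyr actually runs internally, and the image filenames are just placeholders for your own shots.

```python
# A rough sketch of the photo-matching idea behind the sparse cloud,
# using OpenCV's ORB features. 3DF Zephyr uses its own (far more robust)
# pipeline; the filenames here are placeholders.
import cv2

img_a = cv2.imread("statue_01.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("statue_02.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)            # detect up to 2000 keypoints per image
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Brute-force match the binary descriptors and keep the strongest pairs.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]

print(f"{len(matches)} shared points between the two photos")
# Each surviving match is a candidate "pin": the same physical point on the
# statue seen from two different camera positions.
```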


This is actually a much cleaner version than what was initially generated; much of the surrounding architecture was captured too, so I gave those stray points the snip and moved on to the next stage. Achieving that stage is actually one button press, a lot of waiting, and looking at stuff like this:

Riveting stuff indeed.

However, this step is actually doing a couple of stages at once without you needing to see them. The first pass is the Dense Point Cloud:

This is basically the process of subdividing the Sparse Cloud and arriving at accurate estimations of volume based on the source images. At this point it is still just coloured dots, or voxels; no geometry is present yet. That's where the meshing begins.
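To make the jump from coloured dots to geometry a little more concrete, here is a rough sketch using the open-source Open3D library rather than Zephyr's own reconstruction; "statue_dense.ply" is a placeholder for whatever dense cloud you export.

```python
# A loose sketch of the dense-cloud-to-mesh step with Open3D,
# not what 3DF Zephyr does internally.
import open3d as o3d

cloud = o3d.io.read_point_cloud("statue_dense.ply")
cloud.estimate_normals()  # Poisson reconstruction needs per-point normals

# Turn the coloured dots into actual geometry a game engine can understand.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    cloud, depth=9
)
o3d.io.write_triangle_mesh("statue_raw_mesh.obj", mesh)
```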

And now we have geometry! Granted, at this point it's geometry only a mother could love, but geometry nevertheless. You may start to notice certain hiccups or errors, but this is not the time to worry about that; what is important is that we have created fairly accurate volumetric data and turned it into something a game engine might be able to understand. The next step makes that a reality. We just need to look at a little more of this...

...while the software creates a fully textured mesh from the initial one.

After giving the grass a little trim, I've now got a fully textured mesh ready to fix up in Blender. There's a glaring issue with the cap that needs sorting out; you often find this sort of thing when the subject is taller than you are.

With some cleanup, you have a final model ready to show off to your friends and use however you please! Here is a render and an interactive model viewer of the final result.

If you look at the top of the hat, it is clear that some repair work has been done; however, I think it is important to make these repairs in order to sell the idea of the physical counterpart. I've noticed in both of my large-scale photoscan attempts that the top is always the part that needs the most work. I suppose a real pro in this field would have some sort of crane or lift to make sure every angle is captured, but I'm doing this guerrilla-style, so I don't mind a little cleanup here and there.

Also, upon uploading this piece I noticed that I am not the first to use this statue for a photoscan. Here are examples of other people's attempts:
https://sketchfab.com/3d-models/final-henry-cotton-statue-high-64964d9e69bf4b8fbbc6043c9dc96d60
https://sketchfab.com/3d-models/henry-cotton-statue-af948b38ae924425945b0418c72036bd
These are both really cool and capture the detail in different ways. I suspect the kind of camera used and the views selected have an effect, as do differences in weather and so on, so there are many avenues to explore in what makes a good scan.

Note to self: look up other attempts before I do my own rather than after, as doing so may have shown me what to watch out for and what works well.

I'm going to end this post with a picture of the statue from better days. Here's hoping we'll see him like this again soon.


References

https://artuk.org/discover/artworks/henry-e-cotton-esq-first-chancellor-of-liverpool-john-moores-university-1992-65543
https://en.wikipedia.org/wiki/Henry_Egerton_Cotton
https://sketchfab.com/

https://www.3dflow.net/3df-zephyr-photogrammetry-software/


LJMU Immersive Research 3: AI is stealing my job edition

General / 04 January 2021



Artificial intelligence is something that both fascinates and terrifies me. While AI passing the Turing test is probably a long way off yet, the fact that AI is infiltrating our workplaces with alarming frequency is no longer science fiction. In the previous decade, the idea that an AI might be able to create appealing artwork would probably have been met by me with outrage, anxiety, and apprehension; yet now it seems more real than ever.
If you don't believe me, here is a website dedicated to selling unique artworks generated by AI: https://www.artaigallery.com/

Ever the optimist, I choose to see this as more of a "can't beat 'em, join 'em" kind of situation. In this research, I'll aim to understand how AI can help us improve as artists rather than replace us entirely.

Awaiting human input, digital painting by myself.



First I want to be very careful in my definition of AI, as it has become internet shorthand for a lot of things. What I am specifically talking about is anything that uses neural network machine learning.



I see the ability to learn from past successes and failures in order to overcome a current task as the most valuable foundation of our human intelligence. It's the basis of most other forms of intelligence, whether that be emotional intelligence, academic intelligence, and so on. To have grown intelligence in a field, at some point you did something well. You then took the endorphins and used them to connect the neurons associated with whatever you learned to a positive experience, associating that pathway with a positive outcome. Likewise, at some point you must have messed up. In that case, the neurons associated with the memory are negative and you feel discouraged from going down that pathway again.


Any neuroscientists in the house may be headbutting their monitors in disgust at this explanation, so bear in mind I'm an artist, not a scientist; however, this simplified concept does appear to be the core basis of machine learning.

While a computer can't "feel" the positivity or negativity associated with any given act, it can be taught to index pathways as positive or negative. A good outcome may be something simply numerical, for example the distance travelled in an obstacle course, or something more complex and sinister, like a person's likeness. The most successful solutions are often bred with other successful solutions, with occasional total mutational randomness thrown in, in order to slowly evolve towards the best possible solution to the given task. A toy version of that loop is sketched below.
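Here is a toy sketch of that "keep the winners, breed them, mutate a little" loop in Python. The fitness function is just a stand-in number; real systems like DeepMind's walking agents score full physics simulations and often use quite different learning methods under the hood.

```python
# A toy genetic-algorithm loop: score candidates, keep the best,
# breed and mutate them into the next generation.
import random

def fitness(genome):
    # placeholder score: pretend each gene contributes to distance travelled
    return sum(genome)

def breed(parent_a, parent_b):
    # crossover: take each gene from one parent or the other
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    # mutation: occasionally nudge a gene at random
    return [g + random.uniform(-0.5, 0.5) if random.random() < 0.1 else g
            for g in child]

population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)      # best solutions first
    survivors = population[:5]                      # the "positive pathways"
    population = survivors + [breed(random.choice(survivors),
                                    random.choice(survivors))
                              for _ in range(15)]

print("best genome:", max(population, key=fitness))
```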



(Above) Google's DeepMind AI learning how to get a puppet to walk without ever seeing a walk cycle.

(Above) Comedian Jordan Peele, in association with BuzzFeed, demonstrating how machine learning might be used to a more sinister end. This is where I believe AI and machine learning pose a more-than-moderate risk to our ideas of identity and privacy, and I believe that in coming years a lot more regulation will be attached to so-called "deepfakes".


ARTBREEDER




https://artbreeder.com/ is one of those services that has you yelling "how is this free????" into the aether. Essentially, through machine learning you can "breed" artwork with other artwork to create new artwork. It seemingly turns the process of actually painting a character or landscape into a process of unnatural selection. In this case, the process that selects whether an outcome is good or bad is under the control of the user, which even allows individual "genes" to be altered before committing and "breeding" the next generation.
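As a toy model of what "breeding" and "gene" editing might look like numerically, here is a sketch that blends two made-up latent vectors. Artbreeder itself sits on top of real GAN models and would need an actual decoder to turn the result into an image; nothing here reflects its real code or parameter names.

```python
# A toy sketch of "breeding" artworks as arithmetic on latent "genes".
import numpy as np

rng = np.random.default_rng(0)
parent_a = rng.normal(size=512)   # latent "genes" of artwork A (made up)
parent_b = rng.normal(size=512)   # latent "genes" of artwork B (made up)

def breed(a, b, mix=0.5, mutation=0.05):
    child = (1 - mix) * a + mix * b                    # blend the two parents
    child += rng.normal(scale=mutation, size=a.shape)  # small random mutation
    return child

child = breed(parent_a, parent_b, mix=0.3)
child[42] += 1.0   # "edit a single gene" before committing the next generation
# In the real tool, a GAN would now decode this vector into an image.
```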

(Above) This cool, foreboding fortress is 100% unique to me, based on minute decisions made over several generations. The images below show this artwork's "ancestors".

This is the earliest ancestor, an artwork made by myself in Cinema 4D.

This is the AI's interpretation of my piece. A lot of information is lost in the chaos of the process; however, the placement of the water and architecture remains the same throughout.


This is where the piece became the most chaotic; however, you can see the beginnings of the focal tower.


This is where the piece transforms into its current composition. The final stage was more to do with style.

While it is fair to say that in its current chaotic state it certainly won't compete with the most talented artists out there, I think it's a fantastic companion for quickly generating early concepts or visualizations. I also see a real time-saving element in this and future versions of it. While it is very difficult to control to the point where you are designing exactly what is in your head, I think the chaos brings a lot of "happy accidents" to the mix.

What I find quite odd is that the portrait generator is actually much easier to control, perhaps because facial landmarks are far more consistent across the board than geographical ones. I was briefly reminded of https://thispersondoesnotexist.com/, which generates a frighteningly convincing image of a human being with every press of the refresh button.


The following video is an experiment I conducted in Artbreeder involving the same face with the same genes viewed from different angles; it gives an idea of how well the AI grasps the concept of a likeness. The idea here was that it might help artists understand what their characters need to look like from certain angles.


This character was also generated through many generations of... grandparents, I suppose? With minor changes each time.

It's quite hard not to see this as a little unsettling, like breeding the perfect human. There are even sliders for racial percentage, which opens more questions than I care to get into in this post.
Nevertheless, it is an interesting tool I could see being used. In fact, Travis Davids used it for his visualization of real-life South Park characters here:
https://www.artstation.com/artwork/D5zRxn
In his description, he also outlines many other AI techniques used in his project, so it is at least a testament to how AI may help us as artists visualize to a much higher level.
Personally, I feel that a computer's grasp of creativity is chaotic and random, and our input as curators is what guides it onto the path we select. I will conduct more investigations to see how this can be explored further.

References:
https://www.artaigallery.com/
https://www.artstation.com/artwork/D5zRxn
https://youtu.be/gn4nRCC9TwQ
https://artbreeder.com/
https://youtu.be/cQ54GDm1eL0


LJMU Immersive Research 2

Article / 15 November 2020


It has been some time since my first post on this Master's, and in that time we have learned a lot about what it means to be a working artist within the field of immersive art. The main topics we have covered have ranged from technical skills such as projection mapping, 360 video, and Unreal Engine to more nebulous ideas such as the art of storytelling itself.

I like to keep my options open and explore every avenue, so this post will cast a broader net over what I've been looking at. I want to focus on something more specific later on.


With the tools and techniques presented to me, I have been performing small experiments to get to grips with the work.
I will start with my research and show experiments towards the end. 


1 Research

This section will outline my main points of research, including video links and my general opinion on their application within the field.

1.1 Projection Mapping

2019 ST Raphael - Projection map - Mathieu Martin

Projection appeals to me as it holds great power for relatively old technology. It can be completely transformative to spaces, architecture, and objects.

I have a couple of main points of reference when it comes to projecting images. The first is this amazing interactive piece that calculates the height of the sand in a sandpit and projects map topography accordingly. The illusion holds up really well, and I like the sand acting as a medium between the digital and the physical.
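As a rough sketch of the underlying idea, the snippet below turns a (synthetic) height field into the kind of contour image a projector could throw back onto the sand. A real rig would read heights from a depth camera rather than the fake sine-wave terrain used here.

```python
# A rough sketch of the AR-sandbox idea: depth map in, contour bands out.
import numpy as np
import matplotlib.pyplot as plt

# Fake "sand height" field standing in for a live depth-camera frame.
y, x = np.mgrid[0:240, 0:320]
height = np.sin(x / 40.0) * np.cos(y / 30.0)

plt.figure(figsize=(8, 6))
plt.contourf(height, levels=12, cmap="terrain")                  # filled elevation bands
plt.contour(height, levels=12, colors="black", linewidths=0.5)   # contour lines
plt.axis("off")
plt.show()   # this frame is what would be warped and projected onto the sand
```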

Similarly, this topographical projection was achieved by another group. This time, however, the end was not educational but retail-focused. It homes in on the idea of "retail therapy", creating an immersive experience from the rather mundane task of purchasing footwear. It brings to mind a future where your retail outlet becomes something more like a gaming experience.

I like these two videos specifically as they show how a simple idea such as topographical lines may turn something physical into an interactive experience. 

1.2 VR/360 Video


VR Station - Digital Painting - Simon Stålenhag
360 video and VR both face similar challenges in directing a viewer's attention. VR thankfully has years of game design theory to fall back on; 360-degree 3D environments are no new concept to the world of game design. The same applies to 360 video, where the direction of a viewer's gaze is highly important. A great article about how to do this can be found here: https://uploadvr.com/vr-film-tips-guiding-attention/

For me, 360 VR is certainly something I would like to see more of in the future; however, I'd like to avoid it for this project. My biggest concern is hygiene during the most hygiene-conscious time in recent history: I simply don't think a headset is a viable solution for an art installation. Even without the hygiene concerns, I would like to avoid an experience the viewer could have at home.

That being said, I do see a future in commercial VR products. Not only does it lean on pre-existing game design ideas, but in a way, it expands the user's living space virtually anywhere imaginable.



1.3 AR/MR


AR and MR are very exciting to me for an admittedly very childish reason, namely that they are the closest thing to a hologram we have, and with new advancements in machine vision they are more accessible than ever. My primary experiments in this field have involved Spark AR as well as Adobe's new product, Adobe Aero.
I watched some talks at the Adobe MAX conference involving Aero and was frankly blown away by how easy the whole thing was. Just assemble your scene and triggers as you would in any other 3D software (it's even simpler in some ways), save it out, and boom, it's on your phone blending with reality.
More recently I caught a Spark AR talk by the artist collective Keiken called World-building and merging the physical and digital. The talk was very new-age, trippy, challenging of gender roles, and overall something you'd expect to see on Adult Swim at 2 AM. Jolly good stuff. They did something that kind of spoke to me.
They posed the idea of using face filters and MR as a means of world-building. The more you think about it, the more our digital personas look like a world-building exercise; if not world-building, then at the very least world-augmenting.
We use filters, selective opinion voicing, selective angles, lighting, moments, and overall decision-making to project our most ideal selves outwards. With the advancement of face filters getting to the point where we can physically sculpt ourselves to any ideal shape imaginable, it's not hard to see a trend of our real-world selves growing ever further from our virtual counterparts. What's interesting about Keiken is that they have taken this and almost turned it into an intentional expressive piece of reality distortion to the point where it becomes fantasy. 
Simple Spark AR example


From the various talks I have been watching, the general idea I'm getting is that companies are most excited about AR and the possible ways it may enhance our experience of the world. VR still has a way to go, and projection almost feels low-tech in comparison to what you can do with a virtual lens. AR not only has the potential to completely alter our view of the reality we physically occupy, it also has all sorts of applications in education, entertainment, fashion, gaming, and so on.


1.4 Web



I will briefly touch on the web, as it is a sadly underappreciated facet of the world of XR. What benefits the web the most is accessibility: most people don't want to install an app, or oftentimes even go anywhere, to access content. The web has you covered, and with CSS and JavaScript being some of the easier coding languages out there, it's amazing as a tool for artistic expression. net.art and ARGs (fake conspiracies made up to send players down online rabbit holes, often just used as marketing for an external product) have been prevalent for many years, and new developments in 3D web tools have only enhanced this growth. While I could name many such experiences, I think my favorite of recent memory has been this accidental game created from a virtual tour of a house.
8800 Blue Lick Road, from what I have found with proxies to American news sites, was once home to a mafia fencing racket. Boxes were stolen, presumably from trucks and warehouses, and stored here. The owners were incarcerated for their actions, and their home was put on the market without removing the stolen goods. What's even more interesting is that they sent someone in with a 3D camera to take a tour of this labyrinthine property. Apparently, it was an old Baptist church so some oddities in the layout are present, such as a strange walk-in shower room, presumably used for baptism.

The experience, in the end, is a truly unique and unintentionally fascinating narrative of how the criminal underworld lives.




2 Experiments


This is a collection of my various experiments in XR, including 360 video, photogrammetry, AR, and rear projection. My trademark wide range of unfocused experiments will hopefully help out with future ideas. Of all the subjects I have covered, I think the augmented reality side speaks to me the most, so hopefully more experiments in that area will come out in the future.

References
https://youtu.be/bA4uvkAStPc
https://youtu.be/07hiEtggHXw
https://uploadvr.com/vr-film-tips-guiding-attention/
https://youtu.be/1F83n8W2JUg
https://youtu.be/K_-zNcTjZh0
https://youtu.be/0DZ0wBjFKg4
https://3d-marketing.captur3d.io/view/keller-williams-louisville-east/8800-blue-lick-rd


LJMU Immersive Arts Research Post 1

Article / 29 September 2020



Today marks the end of the first week of lectures in Immersive Arts. We have been instructed to start a research blog to document our journey through what we find interesting within the field, hopefully leading to a final project that ties into the research.



I'll start by laying out my understanding of what constitutes an immersive work of art. To me, an immersive artwork is one that the viewer feels a part of, a direct influence on the outcome or journey witnessed within the work. To this end I would categorize most forms of play as immersive experiences; the "art", however, would be in the medium.

I wouldn't necessarily consider a game of football an immersive artwork, but a game of FIFA 2020 might be. In the opposite direction, I wouldn't consider the dreadful 2000 film "Dungeons and Dragons" to be an immersive artwork; however, I would consider its tabletop counterpart, in which characters created by the players are led through the world of Dungeons and Dragons on a story by a host, to indeed be immersive art. It involves an often improvised narrative given a sense of immersion through the players' imagination and our joint desire to tell stories. No two games are alike; everything is totally dependent on every person in the room and their unique goals, senses of humor, and overall personalities.




But art exists outside some dude's basement. The dungeon master can't be present for every patron's visit to the Tate, so what about works that are always accessible yet provide the same sense of taking part?

Allow me to introduce AI Dungeon: a game that uses artificial intelligence to create a narrative in much the way a human Dungeon Master would. Presentation-wise, it's basically a text-based adventure game; however, instead of hitting you with an "I don't understand that" when presented with unfamiliar verbiage, the system tries its best to adapt to whatever is said. It also doesn't have a predefined structure; instead, it tries to learn and alter the course of the story based on its database.


Naturally, given the nature of AI, there are certain discrepancies in the logic, like how the diplomats get away from the player character in the very carriage he is hiding in. This doesn't really ruin the overall experience; something about a world in which anything you can imagine can happen is so appealing to us as humans that the dodgy AI and the text-based limitations kind of melt away, leaving something like an imagination assistant.


Another important facet of immersive art and storytelling is the group of technologies known as XR. This incorporates VR, MR, and AR: technologies that in some way try to fool our perception of reality into incorporating some digital element. The viewer is immersed in the art as their mind perceives the digital to be real. This blending of the digital world and the real world is becoming more and more prevalent and is most commonly found in apps on our phones, VR headsets in our games, and displays in our everyday life.


During my research before this course, I found out about a technique known as Pepper's Ghost. To explain it simply: shadows and dark areas do not reflect, as they are not rays of light but the absence of them, whereas light does reflect and bounces around freely. A bright image on a black background will therefore appear to float when reflected in an angled pane of glass or reflective transparent plastic, so a "hologram" display can be made with nothing more than a projector (or a bright screen) and that pane.


This is what we saw at the 2012 Coachella performance featuring Tupac and Snoop Dogg on the same stage 16 years after Tupac's death. 



I found this so interesting that I ordered a cheap little plastic pyramid you can attach to your phone and created the clip below in After Effects, using four photos of an anatomically correct medical-student skull that I just happen to own for totally non-creepy reasons.
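For anyone who would rather script it than keyframe it, here is a rough Pillow sketch of the same four-view cross layout; the skull filenames and rotation angles are assumptions you would tweak for your own footage and pyramid.

```python
# A rough sketch of the four-view "pyramid hologram" layout:
# one image per side of the pyramid, arranged in a cross around a black centre.
from PIL import Image

views = ["skull_front.png", "skull_right.png", "skull_back.png", "skull_left.png"]
size = 300                                   # each view scaled to 300x300 px
canvas = Image.new("RGB", (size * 3, size * 3), "black")

positions_and_rotations = [
    ((size, 0), 180),        # top slot, flipped to face the top mirror face
    ((size * 2, size), 90),  # right slot
    ((size, size * 2), 0),   # bottom slot
    ((0, size), 270),        # left slot
]

for path, (pos, angle) in zip(views, positions_and_rotations):
    view = Image.open(path).resize((size, size)).rotate(angle)
    canvas.paste(view, pos)

canvas.save("pyramid_hologram_frame.png")  # play this fullscreen under the pyramid
```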


A recent example of this sort of fake hologram occurred when I left Leicester and my friends and I spent my last day there at the National Space Centre. Of all the crazy interesting stuff going on there, some really cool hologram displays caught my eye; they can be seen here. I found this fascinating, and what's worse is that I have no idea how it works. I couldn't find a projector anywhere, and I spent around ten minutes looking. What's so cool about it to me is how it brings home the space age while functionally depicting, through the animations, how these tools actually work.

One thing I have been thinking about doing as a final project is something to do with a "smart mirror": basically, mirrors that people build from special one-way mirrored material with a built-in display hooked up to a Raspberry Pi. The idea is that the first thing you see in the morning when you brush your teeth is now also a display for all sorts of useful information. I think it would be cool to combine this sort of technology with an Xbox Kinect to create virtual 3D spaces within a mirror. It could be a great format for a story about the modern age of digital vanity. A minimal sketch of the display side is below.
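As a bare-bones sketch of the display half of that idea (no Kinect, no Pi-specific code), here is a fullscreen black window with a bright clock rendered in Python's built-in tkinter; most real builds use something like the MagicMirror project instead.

```python
# Minimal smart-mirror display concept: black stays reflective behind the
# one-way mirror, bright white text shows through.
import tkinter as tk
import time

root = tk.Tk()
root.attributes("-fullscreen", True)
root.configure(bg="black")

clock = tk.Label(root, fg="white", bg="black", font=("Helvetica", 64))
clock.pack(expand=True)

def tick():
    clock.config(text=time.strftime("%H:%M:%S"))
    root.after(1000, tick)                 # refresh the time once a second

tick()
root.mainloop()
```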

Hopefully, this gives a solid idea of what I know and find interesting. I hope this course will teach me the necessary tools to get really creative within this field.

References
https://aidungeon.io/play-ai-dungeon/

https://en.wikipedia.org/wiki/Pepper%27s_ghost
https://www.amazon.co.uk/AOWA-Projector-Universal-Smartphone-Accessories/dp/B07RKNZ9BJ/ref=sr_1_2_sspa?dchild=1&keywords=plastic+pyramidhologram&qid=1601401460&sr=8-2-spons&psc=1&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUEzOVNEOFpCREswN1JIJmVuY3J5cHRlZElkPUExMDQ0NTMwMjNINTI3NFVYUkdJMiZlbmNyeXB0ZWRBZElkPUEwNDQzNzEzMVBNNFBUUlkzT0RKTSZ3aWRnZXROYW1lPXNwX210ZiZhY3Rpb249Y2xpY2tSZWRpcmVjdCZkb05vdExvZ0NsaWNrPXRydWU=
https://www.youtube.com/watch?v=RWjvJq4Zabk
https://www.youtube.com/watch?v=GYXTUf6UBUo

some more cool stuff
https://twitter.com/RenatoMunari/status/1288197181229486081
https://twitter.com/duck/status/1291310189401059328
https://twitter.com/rich_lord/status/1291061786712580096
https://twitter.com/larrykim/status/1286414031721394177
https://www.youtube.com/watch?v=oCwE5ayHgjM

