
Robot's-Eye View

General / 25 September 2021

The first introduction I had to the world of art was an old surrealist book my mother kept from her art course before she had me. The book was challenging for a young kid to wrap his head around. Up until then I had been drawing goofy characters from Scooby-Doo that would end up on the fridge for a week or two. Being exposed to Salvador Dalí and Joan Miró made me question at a young age why we make art. Far from the comforting arcs of Saturday morning cartoons, these surreal dreamscapes could frighten a viewer with their sense of bleakness and meaninglessness. This was, to my memory, the first significant challenge to my views on art.

Ever the empath, my mother told me that these artists were expressing how they felt inside. That sometimes, if you feel like things don't make sense, you should share that feeling with someone else. Maybe that way we can come to an understanding of how to make it better. Later in life, in my final year of college studying fine art, we were posed the following question during contextual studies:

 “What is Art?” 

Fairly simple. Art is art, obviously?

The point of this question, of course, was that art is subjective. Everybody's answer is as wrong as it is right. It's up to the individual to choose how they enjoy art, whether through pure aesthetics or deep analysis. Art can even be read in ways the artist never intended, and that would still be a fair interpretation of its meaning.

My answer was something along the lines of “Art is the expression of thoughts or feelings in whichever way the artist sees fit”.

It was a good definition for me, as it didn't exclude anyone: from Botticelli to my Scooby-Doo paintings to Jackson Pollock. Every artist I could think of was expressing something. Be it a sunset, a moment in religious canon or a consumer product, every subject, presented in every medium, was an act of expression of one thing or another, and was thus art.

Notice that my own definition had the word "artist" in it. 

The reason I started this with an anecdote is to illustrate my motivation towards this subject matter: my world view is constantly challenged by how much art has changed since I gave that answer. With the advent of machine learning and "AI", images dreamed up entirely by advanced neural networks are making their way into the art space. If a machine can make art, is it an artist?

This idea first presented itself to me as short videos on my Facebook feed showing a mechanical arm holding a graphite pencil, which would “draw” various portraits and scenes.

The videos often came accompanied by some sort of clickbait title along the lines of “IS THIS IT FOR ARTISTS?!?!?!” or "IS THE HUMAN RACE DOOMED?!?!?", to both of which the answer was a resounding "no".

Of course, this “mechanical artist” was nothing more than a fancy printer, creating whatever the human artist told it to. It wasn't until I was exposed to tools such as Adobe's "content aware" features and “AI” image interpretation tools such as http://nvidia-research-mingyuliu.com/gaugan/ that the idea of a machine being able to create imagery by recognising patterns became more prevalent to me.

The following Twitter page shows an artist who uses machine learning to generate images from scratch: https://twitter.com/images_ai?s=21

These loose, abstract, yet familiar scenes can often feel full of human emotion. Some evoke fear; others, a sense of nostalgia. These end products are the result of huge datasets, chosen and curated by an artist or programmer, which are then broken down into a set of sliders that denote different aesthetic decisions (often in a quite nonsensical manner). The process works in generations, meaning each step gets closer to the final piece. I recommend trying https://www.artbreeder.com/ for an idea of how the technology approaches image making. It's a kind of alchemical mixing process.

With these sorts of networks, a symbiotic relationship between artist and network arises. The artist acts as a disc jockey, remixing the collective artistic decisions of those whose work is in the dataset, with the power of a neural network forming what it considers a sum of all parts. While the dichotomy between artist and machine is present, another party is involved: each resulting piece of art is seemingly 25% artist, 25% machine and 50% the art of those in the dataset. This “standing on the shoulders of giants” element of the process is true in traditional image making too; everything a person makes is a combination of things they have seen previously.

The problem is, a human being will replicate what they have seen imperfectly. Each artist works through their own ego, creating variety in works between individuals. A machine has near perfect recollection, and no sense of self. Two machines given the same data and set on the same task would invariably create the same image. Do we have a right to use the collective aesthetic decisions of artists in this manner?

 If we look to the future, it is interesting to wonder if the collective artwork of human history would be enough basis to express a self-aware machine’s innermost thoughts.  

These were among the questions I wanted to ask with my work, but mainly my focus was "where does technological advancement leave artists?" Or rather, "where does it take them?"

As Henry Ford demonstrated, seemingly sophisticated, expensive, outlandish technology can become an everyday piece of consumer product if it makes life easier. And it can completely change the world as we understand it in the process.

The following article talks about a piece of ‘AI’ art that sold for $432,500 https://www.christies.com/features/A-collaboration-between-two-artists-one-human-one-a-machine-9332-1.aspx?sc_lang=en 

I want to draw attention to this extract from Dr Ahmed Elgammal:

“There is a human in the loop, asking questions, and the machine is giving answers. That whole thing is the art, not just the picture that comes out at the end. You could say that at this point it is a collaboration between two artists — one human, one a machine. And that leads me to think about the future in which AI will become a new medium for art.” 

What Dr Ahmed Elgammal illustrates is that a machine can use input information to create images, but a machine doesn't have a “self” to express. The empathic responses evoked by images created through this process come from the human being. Much like a director and their crew, the human leads the network towards a satisfying final product.

In the extract he states, “The whole thing is the art”. I find this a good insight, as it highlights that we as an audience seek not just a final product, but some sort of narrative around the process.

With this subject in mind, I wanted to present “Immersive Arts” as art and the artist immersed in digital collaboration. 

While true artificial intelligence is still in its primordial soup, and beyond my understanding, I sought to create work that asks these sorts of questions through imagery. Interpretation and observation are some of the most foundational forms of artistic expression. Artists such as van Gogh and Picasso had particularly unique or unconventional methods of representing the world around them and are often seen as visionary. My interest is in the ways a machine's unconventional expressions of reality can give it the appearance of a true artistic collaborator. The project changed from the initial proposal, but what I kept was the idea of “Rhopogrammetry”, a portmanteau of photogrammetry, the 3D reconstruction of scenes through photography, and rhopography, the study of everyday clutter.

The reason I chose everyday things as the subject is that it follows a long tradition of artists, from Cézanne's still lifes to Marcel Duchamp's ready-mades. The world around you tells a story. Ideas of form and function are deeply connected to our understanding of everyday objects. This is a comfortable ground to experiment from, as it is quite low profile as “artistic voice” goes. I wanted the machine to talk louder than I did, so to speak.

As for the machine, photogrammetry using 3DF Zephyr was an important starting point. If you read my previous entries, you will know that photogrammetry is incredibly difficult to get right with a basic setup. I'm hoping I can find some sort of intrigue in those errors: the mistakes and imperfections of digital reconstruction show the machine's unique world view.

My focus then turned to a series of experiments that aimed to blend the machine's imperfect view of reality with my own.

My first port of call was collecting various photo scans of things around my apartment. The first was a carefully assembled still life. The resulting photo scan, admittedly, was a mess; however, it wasn't my place to tell the machine how to "express itself".

OK, I messed up the scan.

Its roughness and bumpiness would generally have been avoided by a painter or sculptor, but I would argue that my human error pushes a more distorted, digital result. The interesting thing about a machine recreating an image, as opposed to a human being, is that a machine doesn't get caught up in symbology. Where a human might assemble this image by clearly distinguishing the "skull" along with all its meaning and subconscious connotations, the machine simply arranges polygons in a mathematical manner and projects photos onto them. The result can be a sludgy, rocky, surreal equivalent.

While working on this particular experiment, I found a piece of software by the name of EbSynth. It seems to marry artist and machine perfectly, using video codec logic to warp custom frames to the motion of videos. I created a turntable animation of the underlying 3D model and painted the following frames to overlay.

These frames provided a painterly style, as well as the more human notions of meaning, separate objects and symbology. When fed through the software, the in-between frames were generated, creating the following effect.



Decisions I never would have made can be attributed to none other than the computer and its imperfect approximations. Of course, these systems were created by humans, so ultimately it's human error; however, this was never anybody's intention. If this project is a metaphorical, mythological journey into the artist inside the machine, then I can think of no better expression of this.

While I liked this experiment, it was carefully arranged, and later painted by me. The top layer is human art; I wanted the machine to talk first.

I often hear photogrammetry referred to as "painterly", so the next step for me was to allow the machine's "style" to come through in some way. I set out to create a piece involving my desk. With these subsequent experiments I tried not to interfere with the space or the way it presented itself; the desk was arranged exactly as I had it for working. I had a feeling that if I started to interfere with the arrangement in order to create something artistic, it would draw the spotlight onto me, not the machine.

One thing to note is that the computer that constructed this model is on the desk. Could this be considered a self-portrait?


For good measure I also scanned some parts of the messy bedroom my partner and I share, as an homage to Tracey Emin.


What stands out to me the most is what the software decides to do when the information isn't so clear: where it doesn't have an angle to project information from, it just makes something up or cuts it out entirely. I also like viewing these models from inside; every extrusion becomes inverted, convex becomes concave and vice versa. The results often challenge our perception of form and function.

When I took this into EbSynth, rather than painting over the machine's interpretation, I used the machine's interpretation as the top layer and added some edited footage as the bottom layer. The effect created is almost psychedelic: the whole world shifts around trying to catch up with itself, and things phase in and out of existence. It's interesting to see how the layers of the computer's mistakes stack up the feeling of irrationality. For good measure I also slowed the footage and allowed After Effects to digitally interpolate between frames.

I call this piece "using the computer while under the influence of narcotic substances"

My place in this piece is still obvious. The recursive loop is intended to appeal to a human viewer and create meaning and rhythm. What I refer to as the computer's mistakes may still be my mistakes: two machines given this task would almost certainly have come to the same conclusion, whereas another human would have produced different errors.

During my Rhopogrammetric (trademark pending) experimentation, I did wonder what it would be like to put another ghost in the shell, so to speak. Working on human subjects with handheld photogrammetry has been a mixed experience, to say the least. In my previous entries I mention how still things are much easier to work with; slight differences in eye position, or overall stance between shots, are natural in people. Again, I am going to use this as the machine's voice.

The first human subject was my partner, who has also featured in my previous photogrammetry work.

This has its errors, but this is the happiest I have been with a human scan so far. I'm shocked this isn't a more popular thing to do. I could imagine someone having a 3D scan portrait of their grandma in her 20s lying around on the family supercomputer. Or how about 3D scans of important historical figures to refer back to. A portrait gains dimension when processed this way.

The errors in this image also demonstrate the computer's inability to symbolise with this process. The hair meshes with the cheek and neck in a disturbing way. Of course, to the machine "disturbing" was never the intent; this is no different to the strange artefacts in the still lifes. I find it interesting that when it's on a human, tech distortion is much more uncomfortable to us.

Speaking of which...

This was incredibly difficult, as it was made entirely out of "selfies", meaning my body had to move in order to get the different angles, but the resulting digital confusion is certainly what I was looking for.

In the spirit of the previous EbSynth experiment, I relit the model to highlight the imperfections in the geometry as an overlay, and used footage of myself as a basis. The resulting video below has a painterly feel that is completely generated by a machine. There is not a single brushstroke in sight; to say it is my style would be a mistake. So the question I want to ask is this:

Is it a self portrait, or a portrait? 

I'd like to say that it is a portrait of me by a machine; however, any attempt on my part to do so would be metaphorical. To imply any true authorship onto the machine at this early stage is in the realm of science fiction. Every key decision was made by me in order to create some empathic response in the viewer. That is my human ego at play. Even the errors I highlighted as works of the machine were left in, drawn attention to, and assigned meaning by me.
Yet in a similar way to the neural network image generators, every human artist has a visual library that they borrow from. Thought and behaviour are reactions to information taken in by our senses. We are machines of our environment, creating art to appeal to each other's unique coding.

As we look to the future it's hard not to wonder what the world of image making will become, especially with the economic boom of NFT art giving AI developers a financial incentive for rapid image production. 

I hope this has given some insight into the area of work that has held my attention these past months. Art for me is about empathy, and while machines are far from a sense of self comparable to ours, it still makes me wonder...

…what will the first truly artificially intelligent artist have to say about us?



LJMU Immersive Summer Project - VR Escape Room

General / 23 April 2021

For my final project in this master's, I want to create a Virtual Reality experience using some techniques I learned previously as well as some new ones. Techniques like 3D modeling, animation, and photogrammetry are all skills I have researched and put into practice, however, my goal for this course was to create an immersive storytelling experience. 

The next logical step up from animating an object is to give it behavior and interactivity. This, like most practices in technology, is an inevitable path to the terrifying world we call programming. Personally, I am no wizard when it comes to programming; however, in this culture of idea sharing, experiences can be made without the need to learn a whole language. Many foundational pieces of code come prefabricated within Unity, and many more can be installed through the Unity Asset Store on a modular basis. I'm hoping that with this and my ropey knowledge of C# I can put together a solid experience.
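To give a sense of how little code a basic interaction actually needs, here is a minimal sketch of a grabbable prop in Unity C#. It only uses Unity's standard MonoBehaviour and Rigidbody API; the class and method names are placeholders of my own, not code from the finished project.

```csharp
using UnityEngine;

// A minimal sketch of a grabbable prop: while held, it follows the
// controller; on release, control returns to the physics engine.
[RequireComponent(typeof(Rigidbody))]
public class GrabbableProp : MonoBehaviour
{
    private Rigidbody body;
    private Transform holder; // the controller currently holding us, if any

    private void Awake()
    {
        body = GetComponent<Rigidbody>();
    }

    // Called by whatever controller script detects the grab input.
    public void Grab(Transform hand)
    {
        holder = hand;
        body.isKinematic = true; // stop physics fighting the hand
    }

    public void Release(Vector3 throwVelocity)
    {
        holder = null;
        body.isKinematic = false;
        body.velocity = throwVelocity; // let the player toss it aside
    }

    private void Update()
    {
        if (holder != null)
        {
            // Snap to the hand; a smoothed follow would feel nicer in VR.
            transform.position = holder.position;
            transform.rotation = holder.rotation;
        }
    }
}
```

In practice a VR toolkit from the Asset Store would handle most of this for you, which is exactly why the modular approach appeals to me.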

Another thing I want to focus on is some of the more thematic elements of my research. What a lot of my explorations revealed to me is that "man vs machine" is at the core of a lot of what we do. This will be woven into the world-building in the project.

As a species, we have had a deeply spiritual and personal connection with technology ever since the discovery of fire. And as we sat around that fire, we told stories. Our insatiable appetite for narrative drives some of our biggest industries, and technology grows alongside it.
Machines tell us stories about what's right and wrong, stories about what to buy and wear, stories about current events, stories about spacemen with machine guns.
With the ever-looming idea of "The Singularity" separating technology from our control into its own entity, the idea that technology could autonomously decide, with its own voice, what some of these stories are is both fascinating and terrifying. This is going to be the central premise of the experience.

The reason I am choosing to make this part thematic is that, in a sense, it's still science fiction. We have a lot of stories about humans using technology to manipulate people, but "True AI" is in its infancy. A lot of our online lives are swayed by blind algorithms, as explained in this Jaron Lanier interview; however, the end profiteer is always a human being. The other reason I am choosing not to go practical with these themes is that, even if I did have the know-how, media indoctrination caused by the advent of technological free will isn't particularly what I'd like to be remembered for. My motives are more those of the court jester than the mad scientist.

Gameplay

Remember long ago, in the far-off time of 2019, when locking a group of people in a room together was a viable business model? We had these things called escape rooms, in which a group of weary travelers would pay to be locked in a room where the key to getting out is hidden behind a series of puzzles.

The following video demonstrates a sort of virtual escape room that's gained popularity with VR enthusiasts. 

This setup ticks many boxes for this project, first and foremost being scope. The small size of an escape room means it can be developed from my small student room, and it also means I don't have to create too many assets: no vast skyboxes, just a single room and a way out. If anything, the claustrophobia acts as a motivator in this situation.

The second box it ticks is environmental storytelling. For a puzzle game such as the one I am proposing, world objects can be the tools by which the player solves problems. I find one of the biggest draws of virtual worlds is what the game critic Ben "Yahtzee" Croshaw called "messin' with the set dressing". The environment can be engaged with on a personal basis; the mise en scène can be picked up, tossed aside, stacked up, or thrown out of a window. The following video by Jacob Geller talks about how interactable objects give us a better window into the worlds built in games.

Around the 9:50 mark, he talks about "rhopography", from the Greek rhopos: the study of trivial, mundane, and small objects. He likens this to still lifes and what they tell us of a time or a lifestyle. This struck a chord with me, as my studies have previously gone into the ready-mades of Tracey Emin and Marcel Duchamp, and how photogrammetry is a tool to capture that which already exists. If small ready-made objects can tell us stories in this experience, then I'm coining the term "Rhopogrammetry" as the process of digitally capturing everyday clutter going forward.
If anything, I would like this project to be an exercise in photo-scanning mundane items and allowing the player to interact with them as they could in the real world. If there are moving parts, it would be interesting to try to replicate that digitally.

The final reason I want to go with an escape room is the possibility of branching narrative. With this smaller scope, I think the possibility of a puzzle having multiple solutions with different outcomes within the story is more viable. This is not concrete per se; further down the line it may become apparent that a linear story is more achievable. However, the prospect is exciting, and a sketch of how it might work in code follows below.
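As a rough, hypothetical sketch of how a multiple-solution puzzle could feed a branching story (every name here is a placeholder I've invented, not code from the project), it might be as simple as recording which solution the player used and letting the dialogue branch on it later:

```csharp
using UnityEngine;

// Hypothetical sketch: each puzzle records which solution the player used,
// and the AI antagonist's later dialogue can branch on that flag.
public enum EscapeMethod { None, PickedLock, FoundKey, BrokeDoor }

public class StoryState : MonoBehaviour
{
    public static EscapeMethod DoorSolution { get; private set; } = EscapeMethod.None;

    public static void RecordDoorSolution(EscapeMethod method)
    {
        DoorSolution = method;
        Debug.Log($"Player escaped via {method}; the AI can comment on it later.");
    }
}

// Example use from a puzzle object.
public class DoorPuzzle : MonoBehaviour
{
    public void OnLockPicked() => StoryState.RecordDoorSolution(EscapeMethod.PickedLock);
    public void OnKeyUsed()    => StoryState.RecordDoorSolution(EscapeMethod.FoundKey);
}
```

Even if the final story ends up linear, flags like these cost very little and leave the door open for branches later.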

Story

The story I'm interested in telling assumes the singularity has happened, to a degree where machines are the ruling class; however, they are merciful enough to let us live, albeit under strict rules. Our protagonist will remain nameless and faceless so as to allow the player to step into their shoes. Our antagonist will be an artificial agent existing within some small physical object, like an Amazon Alexa device, who will taunt the player and reveal more about the world throughout the experience. The Intelligence has decided, through your social media presence and other means of surveillance, that your problem-solving skills are at dangerously low levels, and you must make your way out of this room in order to prove your usefulness to society.
This is a loose premise for now, but enough to go on. I intend to work with my brother, who is a voice artist, and a writer who is also a very close friend of mine, in order to flesh out the story. I think once the AI's character is fleshed out, the story will be much easier to write.

Much like a VR escape room, the idea of an AI antagonist is quite well covered in media.

It's hard to forget Douglas Rain's chilling rendition of HAL 9000 in 2001: A Space Odyssey, which set the tone for a lot of the rogue AI types to come. What strikes me upon inspection is the sheer number of video games that capitalize on this trope.

GLaDOS from the Portal series

The most well-known is perhaps GLaDOS from Valve's Portal series. Her assumed personality is the result of many smaller "personality cores" which dictate her behavior; to name a few, these include the drive to conduct tests, an interest in space, and the recipe for a cake. As these cores get damaged she becomes more irrational and antagonistic. During the sequel, she is removed from these cores and gains her own sense of self. I find these games, which also feature a silent protagonist, are much more about the antagonist's arc than the protagonist's.

My theory on why video games have been so attached to this idea is that it represents the structure of games themselves. There are foundational lines of code and game theory that push and pull the players in certain ways; there is an invisible hand that spawns enemies, offers rewards, and even offers choices. My suggestion here is that an AI antagonist actually embodies that phenomenon. An AI in charge of a facility makes the automated parts of game design feel like the work of an in-game entity. Things being conveniently placed in order for you to solve a problem seem like the work of the antagonist's manipulation, not the game designer's desire for you to progress. Themes of disobeying the rules of the game often manifest in these titles, almost a reference to players acting irrationally being frustrating to a developer.

While it doesn't contain an AI per se, the bizarre metanarrative known as The Stanley Parable, designed and written by developers Davey Wreden and William Pugh, treats the player and narrator's relationship in a similar way. The illusion of choice is played openly and honestly. The narrator tells the story of the titular Stanley, and whether or not the player goes along with what the narrator tells them dictates the path of the story. It intentionally draws attention to its own artificiality in a way that is comparable to the use of AI in other games. The themes of disobedience found in games such as Portal also surface here.

Without spoiling too much, the dry sense of humour throughout the game both parodies and utilizes many of the core decision-making mechanics in games, revealing the narrative determinism behind the multiple-choice linearity we are used to seeing disguised as freedom. This story is as much Stanley and the narrator's as it is the developers'.

This harks back to the idea of narrative trees as a return to an oral storytelling tradition, in which the listener can interject and change the flow of the narrative. Even the most complex web of story branches has an architect and therefore is at its core predetermined.
Fittingly, I still find machine-learning-driven narrative to be the closest return to this tradition in games. I believe that with enough training and advances in machine learning, we will find machines starting to tell our stories in games, perhaps based on a loose template made by a writer, or perhaps trained on the director's favorite references. Again, this is science fiction and speculation, so it will remain thematic; however, if I am to create a branching story, it's important to address this.


Feasibility


This project is quite open-ended in terms of the story told. With the involvement of some of my friends, among which are a writer and a voice actor, I hope to flesh out the character of the AI to at least a consistent single branch linear story (multiple branches may be added in if time permits). 

In terms of what I can accomplish on my own, assets are not a problem. I have a very solid understanding of PBR pipelines and 3D animation. Where I may fall flat is in programming. My only real experience with this was a tic-tac-toe game I made for the command window in C#.

I am hoping I can get a lot done just with premade code from the Unity Asset Store and through tutorials. I may also seek assistance where necessary.

While this is a potential roadblock, I believe a good project is one that pushes its author into unfamiliar territory.


The biggest concern for me is to avoid something called "scope creep". Wikipedia defines it as follows:

Scope creep (also called requirement creep, or kitchen sink syndrome) in project management refers to changes, continuous or uncontrolled growth in a project’s scope, at any point after the project begins 


In short, it means it is pragmatic to set a realistic scope and stick to it. I need to know from the start which notes I want to hit and not expand the scope unless the goals are all met. My goals are as follows:

1) Create an immersive experience in which a player can interact with objects to solve puzzles. This is the number one goal for a good reason. An immersive experience hinges on interaction. The gameplay must engage first and entertain later.


2) Use the story to talk about the themes of my research. While I have been on this course, I have developed a different perspective on our relationship with machines. Between machine learning, the ability to change our appearance, and even the ability to change the world around us, the future implications for humanity, identity, and reality are what interest me the most as an artist. I would like the narrative to reflect this.


3) Tell a story through voice acting and mise en scène. These are the two storytelling techniques I would most like to focus on for this project. Well-written dialogue and nicely crafted rhopographic junk can tell the story in a confined environment, and they also lend themselves nicely to the VR format.

These three goals define the minimum of what I hope this project will achieve. Until I feel they are significantly met, I will not add any further goals to these three core ideas. Further developments may include a branching storyline, music, and multiple levels, but for now these will remain concepts rather than goals.



In conclusion, I hope to make a VR escape room experience that engages the viewer and helps discuss our relationship with technology both as a storytelling aid and as our future overlords.

References

https://www.youtube.com/watch?v=E6N4SmUgNgc&feature=emb_logo - VR escape room examples

https://en.wikipedia.org/wiki/Scope_creep - Scope creep Wikipedia entry

https://www.youtube.com/watch?v=kc_Jq42Og7Q&t=2s - Jaron Lanier interview about social media's grip on its consumers

https://tvtropes.org/pmwiki/pmwiki.php/AIIsACrapshoot/VideoGames - TVTropes entry about video games with AI antagonists

https://www.youtube.com/watch?v=0VmsUXzCFkg&feature=emb_imp_woyt - Stanley Parable video review


LJMU Immersive Research 6 prototype write up

General / 20 January 2021

For some time, I have been thinking about how the photogrammetry techniques I have learned link to the art world as a whole. The first obvious link is photography, as it is the foundation of photogrammetry, and the outcomes are fairly similar. The major difference to me is the inclusion of the third dimension. A photograph is traditionally two-dimensional, and with that comes a certain set of rules to do with balance and composition. The result of photogrammetry is something I would compare more to sculpture, so in this case, what would be the real-life precedent for capturing sculptural forms based on pre-existing objects?
Fortunately, making sculptures from things that already exist, in itself, already exists.

Fountain (1917), Marcel Duchamp

This is an art form known as the "ready-made". A source of much contention in the art world, it no doubt started as a means of rebellion against the traditions of old, then grew to be an entire art form in its own right. The main premise is that in our arrogant and god-fearing days of antiquity, a certain level of prestige was assigned to the label of "Art", a prestige which remains to this day.

Artists like Turner and Monet often pushed the boundaries of what could fit into this prestige and were met with criticism for their departure from representation. Further down the line, this led to a lot of questions about what "Art" actually means, and whether simply being labeled as art held any more intrinsic value than being labeled a car or a pair of shoes.
These questions were at the heart of the Dada movement, and Duchamp presented the above work, Fountain. This is obviously a mockery of the "anything is art" mindset; however, it set the bar to the point where now anything can be presented as an artwork for the public's adoration or, in most cases, disgust.
Personally, I would label anything an artist offers to the public as art, for better or worse. Freedom of expression comes hand in hand with our freedom of speech, after all. I don't see the simple classification of "art" or "not art" as either indicative of value, or up to anyone but the artist. While I would probably not spend much more than 20p going to see this display in person, I believe it certainly opened the doors for later works such as the following:

Unmade Bed, Tracey Emin

Another highly contentious piece, this work by artist Tracey Emin highlights the narrative opportunities held by everyday arrangements of items. While visually unimpressive, it offers a window into a person's life that no painting or photograph could accomplish. It makes you stop and look at your own surroundings, your own unmade bed, and think, "what story does this tell?". I believe that, given enough time, a fairly educated guess could be made about a person you have never met but whose room you have been in.
This narrative opportunity is something photogrammetry lends itself quite nicely to. Using the pre-existing world around us we can tell stories in a digital space. 

Below is a LiDAR scan I made of my room. I invite the reader to explore it and look for a story, based on the place where I spend 95% of my time.

During a pandemic, inviting others around for a quick cup of tea is not really an option. It is my belief that this cuts out an important part of getting to know people. The story of another person's living space is closed behind law and disease, so exploring the digitization of living space feels important on a social level.

I have offered this link in a previous blog; however, I would like to take the time to reference this virtual tour of an old mafia fencing warehouse, once a Baptist church: https://3d-marketing.captur3d.io/view/keller-williams-louisville-east/8800-blue-lick-rd

The number of hours I have spent on this 3D tour is extensive, to say the least, and I would pin that on the storytelling potential I see in it. Nothing within this tour is doctored, and it gives a true sense of urban exploration and dread. I would recommend exploring the tour yourself and watching what little stories your brain naturally starts assigning to the experience.

One way photogrammetry can be compared to photography is its potential for "collage"-style workflows. Items captured in different places at different times can be bashed together and given the same lighting scheme. It offers a surreal and sometimes unsettling experience, yet still touches on the storytelling potential of the objects captured. The following prototype aims to use both ready-made ideas and 3D collaging in order to create a pseudo-interactive collage.
https://jakehatt.github.io/


References
https://3d-marketing.captur3d.io/view/keller-williams-louisville-east/8800-blue-lick-rd

Unmade Bed, Tracey Emin

Fountain (1917), Marcel Duchamp

LJMU Research 5: Photoscanning a Human Subject

General / 08 January 2021


I've grown quite fond of photogrammetry as a workflow. I feel that at present I am just scratching the surface of what it can be used for. Currently, I have only really tackled non-living subjects, which makes my job easier as a photogrammeter; however, I would like to push this even further and tackle a living subject. Luckily, I happen to share a flat with one. I would like to introduce you to my life partner, Amie Woodroffe.

Photo of her I took on a photography course

In my preliminary research, I found that human subjects typically work best in a setup like this:



Courtesy of https://blog.twindom.com/
The benefit of this setup is that there is no delay between camera positions; every angle is captured at the same instant, so the model can pose in a more extreme or exaggerated manner without worrying about minor shuffles between shots.

This would be the ideal situation for me, but the blog I found this image on suggests it would cost tens of thousands of dollars, and as I mentioned in my last post, we're doing this thing guerrilla-style. Such luxuries remain in the hands of the real pros. My goal here really is to see how far a man and a camera can take this.

In my last post, I also commented on how various factors may affect the outcome of a photo scan. For the sake of my future self, here are the main conditions that may have affected this shoot:

Camera: Nikon D3500

Time of day: 3:30 pm

Season: Winter

Cloud coverage: Overcast

Other weather conditions: Light drizzle


I predict that the overcast sky had a positive effect on the outcome. Diffused light seems to be much better, as it bakes in minimal lighting data with no strong shadows. This is good because it allows you to light the model how you please after the fact, and doesn't result in conflicting lighting.

Upon completing the shoot, I would like to note that, for the sake of the model, you should try to avoid weather that makes them uncomfortable. Not only for their own sake: discomfort tends to make people shuffle around, which may ruin the reconstruction. Again, this is where a fancy studio would be better.

Here are the photos used for the photo scan; out of these, only two turned out to be unusable, which is a good outcome in my book.

Another disadvantage of using the outdoors as a setting is that the software often confuses background points for foreground ones. On the other hand, the extra background can be an advantage in reconstructing a scene, as there is more data for the software to cross-examine between shots. I have seen examples of people doing head scans against a greenscreen, so this may be an avenue for exploration.
I also found the following article covering methods for scanning just the head, and would like to investigate the techniques outlined.

https://adamspring.co.uk/single-post/2017/08/30/Single-Camera-Head-Scanning-Photogrammetry


Over in 3DF Zephyr now, this aerial shot shows the outcome of the sparse point cloud. The software has done a great job reconstructing the entire scene.
Circled in red is the bit we are actually hoping to use, so consider this a note on the disadvantages of outdoor shooting: the majority of the data produced is actually background. I snip the scene down to just the circle containing Ms. Woodroffe.


With some snipping, this is the result. The data we are left with is quite a small percentage of the overall scene.


3DF Zephyr has reconstruction presets that I will try to explore; here we can see there is a preset for the human body.

In my experimentation, I found that the best approach with these presets is to start small and generate over multiple passes to make sure the best result is produced.

In my attempts, I came across a few failures, which can often be quite funny, however frustrating.

The above reconstruction did a great job on the face, but her body absorbed the floor texture and the reconstruction produced two right arms.

The opposite is true for this one: the body came out great, but the head has collapsed. Here she is again from a different angle.

This is certainly a game of trial and error. There are definitely some notable issues with the photographs I used, as I seem to have missed a vital angle between her left profile and her left three-quarter view; that sort of gap gives the software a hard time stitching the views together. I also think there is a natural shift in a person's stance that is unavoidable. Even trees move in the wind, so minor movements of the body are bound to happen. I believe post-adjustments will also be necessary for almost all uses of this method on a living subject.



After many attempts, I came to this.

In this version, there are still many issues with the mesh. The aforementioned missed link between her profile and three-quarter view resulted in some caved-in geometry around the left cheek.

The texture produced was also very ghostly and didn't hold detail very well.

Similar caving in was found at the back of the model, and a notable ripple existed throughout the mesh.


For my previous studies in this technique, this would have been the end of the process. It felt as though I was defeated. "This is why the pros have those fancy studios," I thought to myself.


And then I remembered that Blender had a sculpt feature.

"Maybe we should start manually addressing some of the more gleaming issues and it will look a little better." I thought.

So, bouncing between Blender's sculpting tools and a piece of software called Substance Painter, I gradually fixed the issues and artefacts that the computer hadn't caught.




Blender's sculpt mode allows you to fix issues with the geometry using a more intuitive sculptor's kit. This works much better than traditional polygon manipulation here, as the generated topology is pretty chaotic.



Substance Painter is a great tool for PBR texturing. For those not aware, PBR texturing is a method used in game engines, among other things, to output real-time renders. It's a sequence of textures that lets you describe things like surface detail and whether a thing is made of metal or skin. This allows for relatively processor-cheap refinement of the model, and material information that helps sell the overall look.
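As a rough illustration of what that "sequence of textures" looks like on the engine side, here is a minimal sketch of wiring a PBR texture set into Unity's built-in Standard shader from C#. The field names are my own placeholders; the shader property names are Unity's.

```csharp
using UnityEngine;

// Minimal sketch: assigning a PBR texture set (as exported from a tool
// like Substance Painter) to Unity's built-in Standard shader at runtime.
public class PbrMaterialSetup : MonoBehaviour
{
    // Placeholder fields: assign the exported maps in the Inspector.
    public Texture2D baseColor;
    public Texture2D metallicSmoothness;
    public Texture2D normalMap;

    private void Start()
    {
        var mat = new Material(Shader.Find("Standard"));
        mat.SetTexture("_MainTex", baseColor);                   // albedo / base colour
        mat.SetTexture("_MetallicGlossMap", metallicSmoothness); // metal vs. skin, shininess
        mat.SetTexture("_BumpMap", normalMap);                   // fake sculpted surface detail
        mat.EnableKeyword("_METALLICGLOSSMAP");
        mat.EnableKeyword("_NORMALMAP");
        GetComponent<Renderer>().material = mat;
    }
}
```

Normally you'd assign these maps in the editor rather than at runtime, but seeing them spelled out makes it clear why each map in the export matters.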

 I took the time to paint her necklaces and buttons with a metallic texture. I also used a Normal Map to add different sculptural textures to the clothing, such as a grainy texture on the fur that sells the illusion much better.

Another useful tool within Substance Painter is the Projection tool, which I used to project photographic data from the shoot onto the base colour layer. 

This was most useful in keeping the likeness consistent; fixing the model with Blender's sculpt tool can often ruin the likeness, so I had both pieces of software running and bounced between them. The following shows the final output: the first is a Blender Cycles render, the second a real-time interactive view courtesy of Sketchfab.


On reflection, it's hard to say how much of the final result is truly and purely photogrammetry at this point; however, it makes for a great foundation for studies such as these.

I would approach photogrammetry more loosely in the future; it's more about the final product than how it looks starting out. There are messed-up examples among my failures that, upon reflection, may have been fine starting points had I known how much manipulation can be done after the fact.

My final note is that I have been thinking about this in terms of practicality. I've questioned the how quite in depth, and I think some thought must be put into the why.
A definite avenue of exploration is this Smithsonian collection a coursemate found and put in the group chat.
https://3d.si.edu/cc0?utm_source=siedu&utm_medium=referral&utm_campaign=oa

I wasn't sure if the models were photogrammetry or not. I dug into the blogs of the Smithsonian Digitization Program Office and found this post confirming the use of photogrammetry in the scanning of a Bell X-1 aeroplane:

https://dpo.si.edu/blog/bell-x-1-3d

"We used laser scanners for geometry capture and Photogrammetry to capture the color information of the Bell X-1. With Photogrammetry, we are able to turn our digital cameras into 3D scanners using post-processing software. The laser scanners capture over 1 million data points per second. The data we collected on the Bell X-1 is accurate to about one millimeter. " - Vincent Rossi

Courtesy of https://dpo.si.edu/blog/

This proves to me that the techniques I have been exploring have legs in the process of archiving museum collections, and the idea of combining this with laser scanning is something I'd really like to explore. 

This practitioner also details some of the issues the team overcame later in the post:

"Certain materials on the Bell X-1 did not scan well and presented challenges for the scanning team. We had difficulty with two types of surfaces on the aircraftthe glass windshield and the painted blue areas around the stars on the wings. Glass does not scan well because the laser mostly passes right through it. Luckily, we were able to get enough points on the glass surface to accurately reconstruct the windshield using CAD (Computer-Aided Design) software. " - Vincent Rossi

This second quote reminds me of a lot of the problems I overcame in my own tests, and how oftentimes the resulting mesh must be fixed manually. It shows that no matter how fancy your setup is, it's always going to need some work afterwards.

References:
https://sketchfab.com/

 https://www.substance3d.com/

https://adamspring.co.uk/single-post/2017/08/30/Single-Camera-Head-Scanning-Photogrammetry

https://blog.twindom.com/blog/overview-dslr-photogrammetry-3d-full-body-scanner

https://3d.si.edu/cc0?utm_source=siedu&utm_medium=referral&utm_campaign=oa

LJMU Research 4: A more in-depth look at photogrammetry

General / 06 January 2021





I briefly touched on photogrammetry in a previous post. I find this method particularly interesting, as it feels like the possibilities are endless. The whole world is basically an open-source asset library; in theory, anything can be digitized and used however you please. The method involves photographing a subject from many angles, running the photos through some software, and getting a fully realized 3D model out the other side.


My first contact with this technique was through a now-defunct app called display.land. Essentially, what this service allowed you to do was create a photogrammetry asset from point clouds generated using video, meaning you didn't have to take several photos, and you received live feedback on which points had been captured successfully.

Display.land

With this service (I call it a service because the actual software ran on a server and didn't exist within the app), I managed to create a very low-quality 3D model of myself. The following media shows the results of these tests.


https://twitter.com/GhostBrush_/status/1276619387420782599

https://twitter.com/GhostBrush_/status/1276586259864002560

I previously touched on the statue base asset I created using photogrammetry (below).



It came to my attention that I didn't really document the workflow behind it, so I set out to create a new one.

It can be hard to pick a good location: some places may greet you with odd looks from strangers wondering why on earth you are walking in circles photographing the same subject, while other locations are simply inaccessible for the correct kind of shot.

Fortunately, the streets are much less populated due to ahem...
...Current world events...
...ahem...
...So the odd looks are not so much of a problem.

Recently, one of my walks around the city of Liverpool blessed me with this cheeky and completely harmless piece of petty vandalism (below).
Before I get in trouble, I must stress that whoever this rogue statue-masker is, it isn't me.


Now, in my research I couldn't find much about Henry Cotton, the man in the statue. I know he was the first Chancellor of the university I attend, and it's hard to say how he would have felt about this. I like to think a man of education would support the use of face masks in our current time, so all I can do is hope he would approve.

Here is a picture of him without a mask, courtesy of https://artuk.org/
To capture this statue in a 3D sense, the first thing you need is a camera and the patience to walk in concentric circles for a bit.

The above shows roughly what you're aiming for. Many of these images turned out unusable; thus is the struggle of a photogrammeter. Once you have a nice array of images from all different angles, it's time to plug them into your software of choice. I'm using 3DF Zephyr for this; however, you can use whatever you like.



First things first, you need to generate a Sparse Point Cloud. This is just a loose scattering of anchor points matched between photos: basically, the software finds similarities between the photographs and pins them for further detail extrapolation later.
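To give a feel for the geometry underneath (the principle only, not 3DF Zephyr's actual code), here is a small C# sketch of how one matched feature, seen from two known camera positions, pins down a point in 3D: you intersect the two viewing rays, or rather find the closest point between them, since in practice they never quite meet.

```csharp
using System;
using System.Numerics;

// Sketch of the core idea behind a sparse point cloud: the same feature,
// photographed from two known positions, is triangulated to a 3D point.
class Triangulation
{
    // Returns the midpoint of the shortest segment between two rays,
    // a standard way to triangulate rays that don't exactly intersect.
    static Vector3 Triangulate(Vector3 origin1, Vector3 dir1,
                               Vector3 origin2, Vector3 dir2)
    {
        dir1 = Vector3.Normalize(dir1);
        dir2 = Vector3.Normalize(dir2);

        Vector3 w = origin1 - origin2;
        float b = Vector3.Dot(dir1, dir2);
        float d = Vector3.Dot(dir1, w);
        float e = Vector3.Dot(dir2, w);
        float denom = 1 - b * b; // approaches 0 as the rays become parallel

        float t1 = (b * e - d) / denom; // distance along ray 1
        float t2 = (e - b * d) / denom; // distance along ray 2

        Vector3 p1 = origin1 + t1 * dir1;
        Vector3 p2 = origin2 + t2 * dir2;
        return (p1 + p2) * 0.5f;
    }

    static void Main()
    {
        // Two cameras a metre apart, both sighting a feature near (0, 0, 5).
        Vector3 point = Triangulate(
            new Vector3(-0.5f, 0, 0), new Vector3(0.5f, 0, 5),
            new Vector3(0.5f, 0, 0), new Vector3(-0.5f, 0, 5));
        Console.WriteLine(point); // roughly <0, 0, 5>
    }
}
```

Repeat that for thousands of matched features across dozens of photos and you have a sparse cloud; the hard part the software actually solves is working out where the cameras were in the first place.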


This is actually a much cleaner version than what was initially generated; much of the surrounding architecture was captured too, so I gave it the snip and moved on to the next stage. Getting to that stage is actually one button press and a lot of waiting, looking at stuff like this:

Riveting stuff indeed.

However, this stage is actually doing a couple of steps at once without you needing to see them. The first pass is the Dense Point Cloud:

This is basically the process of subdividing the sparse cloud and coming to accurate estimations of volume based on the source images. At this point the result is just coloured dots, or voxels; no geometry is yet present. That's where the meshing begins.

And now we have geometry! Granted, at this point it's geometry only a mother could love, but geometry nevertheless. You may start to notice certain hiccups or errors, but this is not the time to worry about that. What is important is that we created fairly accurate volumetric data and turned it into something a game engine might be able to understand. The next step makes that a reality; we just need to look at a little more of this...

...While the software creates a fully textured mesh from the initial mesh

After giving the grass a little trim, I've now got a fully textured mesh ready to fix in Blender. There's a glaring issue with the cap that needs sorting out; you often find this sort of thing when the subject is taller than you are.

With some cleanup, you have a final model ready to show off to your friends and use however you please! Here is a render and an interactive model viewer of the final result.

If you look at the top of the hat, it is clear that some repair work has been done; however, I think it is important to ensure these repairs are made in order to sell the idea of the physical counterpart. I've noticed in both my large-scale photoscan attempts that the top is always the part that needs the most work. I suppose a real pro in this field would have some sort of crane or lift to make sure every angle is captured, but I'm doing this guerrilla-style, so I don't mind a little cleanup here and there.

Also, upon uploading this piece, I noticed that I am not the first to use this statue as a photoscan. Here are examples of other people's attempts:
https://sketchfab.com/3d-models/final-henry-cotton-statue-high-64964d9e69bf4b8fbbc6043c9dc96d60
https://sketchfab.com/3d-models/henry-cotton-statue-af948b38ae924425945b0418c72036bd
These are both really cool and capture the detail in different ways. I suspect the kind of camera used and the views selected have an effect, as do differences in weather and so on, so there are many avenues to explore in what makes a good scan.

Note to self: I must look up other attempts before I do my own, rather than after, as this may have shown me what to watch out for and what works well.

I'm going to end this post with a picture of the statue from better days. here's to hoping we'll see him like this again soon.


References

https://artuk.org/discover/artworks/henry-e-cotton-esq-first-chancellor-of-liverpool-john-moores-university-1992-65543

https://en.wikipedia.org/wiki/Henry_Egerton_Cotton
https://sketchfab.com/

https://www.3dflow.net/3df-zephyr-photogrammetry-software/

LJMU Immersive Research 3: AI is stealing my job edition

General / 04 January 2021



Artificial intelligence is something that both fascinates and terrifies me. While AI passing the Turing test is probably still a long way off, the fact that AI is infiltrating our workplaces with alarming frequency is no longer science fiction. In the previous decade, the idea that an AI might be able to create appealing artwork would probably have been met by me with outrage, anxiety, and apprehension, yet now it seems more real than ever.
If you don't believe me, here is a website dedicated to selling unique artworks generated by AI: https://www.artaigallery.com/

Ever the optimist, I choose to see this as more of a "can't beat 'em, join 'em" kind of situation. In this research, I'll aim to understand how AI can help better us as artists rather than replace us entirely.

Awaiting human input, digital painting by myself.



First I want to be very careful in my definition of AI, as it has become internet shorthand for a lot of things. What I am specifically talking about is anything that uses neural network machine learning.



I see the ability to learn from past successes and failures in order to overcome a current task as the most valuable foundation of our human intelligence. It's the basis of most other forms of intelligence, whether emotional, academic, and so on. To have grown intelligence in a field, at some point you did something well. The endorphins then connected the neurons associated with whatever you learned to a positive experience, associating that pathway with a positive outcome. Likewise, at some point you must have messed up. In that case, the neurons associated with the memory are negative, and you feel discouraged from going down this pathway again.


Any neuroscientists in the house may be headbutting their monitors in disgust at this explanation, so bear in mind I'm an artist, not a scientist. However, this simplified concept does appear to be the core basis of machine learning.

While a computer can't "feel" the positivity or negativity associated with any given act, it can be taught to index pathways as positive or negative. A good outcome may be simply numerical, for example distance traveled in an obstacle course, or something more complex and sinister, like a person's likeness. The most successful solutions are often bred with other successful solutions, with occasional total mutational randomness thrown in, in order to slowly evolve towards the best possible solution to the given task.
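As a toy illustration of that breed-and-mutate loop (a bare-bones genetic algorithm, not how DeepMind or Artbreeder actually work under the hood), here is a short C# sketch that "evolves" a random string towards a target phrase:

```csharp
using System;
using System.Linq;

// Toy genetic algorithm: candidates are scored, the fittest "breed",
// and random mutation keeps the population exploring new solutions.
class EvolveDemo
{
    const string Target = "HELLO WORLD";
    static readonly Random Rng = new Random();

    static char RandomChar() => (char)Rng.Next(' ', 'Z' + 1);

    // Fitness: how many characters already match the target.
    static int Fitness(string s) => s.Zip(Target, (a, b) => a == b ? 1 : 0).Sum();

    // Breeding: each character comes from one of the two parents,
    // with a small chance of random mutation.
    static string Breed(string mum, string dad) =>
        new string(mum.Select((c, i) =>
            Rng.NextDouble() < 0.02 ? RandomChar()
            : Rng.NextDouble() < 0.5 ? c : dad[i]).ToArray());

    static void Main()
    {
        // Start from a fully random population.
        var population = Enumerable.Range(0, 200)
            .Select(_ => new string(Enumerable.Range(0, Target.Length)
                .Select(_ => RandomChar()).ToArray()))
            .ToList();

        for (int generation = 0; ; generation++)
        {
            var best = population.OrderByDescending(Fitness).Take(20).ToList();
            Console.WriteLine($"Gen {generation}: {best[0]}");
            if (best[0] == Target) break;

            // Next generation: children of randomly paired top performers.
            population = Enumerable.Range(0, 200)
                .Select(_ => Breed(best[Rng.Next(best.Count)],
                                   best[Rng.Next(best.Count)]))
                .ToList();
        }
    }
}
```

The "fitness" here is trivially numerical; swap it for anything measurable (distance walked, resemblance to a face) and you have the skeleton of the systems in the videos below.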



(Above) Google's DeepMind AI learning to get a puppet to walk without ever seeing a walk cycle

(Above) Comedian Jordan Peele, in association with BuzzFeed, demonstrating how machine learning might be used to a more sinister end. This is where I believe AI and machine learning pose a more-than-moderate risk to our ideas of identity and privacy. I believe that in the coming years a lot more regulation will be aimed at so-called "DeepFakes".


ARTBREEDER




https://artbreeder.com/ is one of those services that has you yelling "how is this free????" into the aether. Essentially, through machine learning, you can "breed" artwork with other artwork to create new artwork. It seemingly cuts the process of actually painting a character or landscape down to a process of unnatural selection. You see, in this case the selection of whether an outcome is good or bad is in the control of the user, who can even alter individual "genes" before committing and "breeding" the next generation.

(Above) This cool, foreboding fortress is 100% unique to me, based on minute decisions made over several generations. The images below show this artwork's "ancestors".

This is the earliest ancestor, an artwork made by myself in Cinema 4D.

This is the AI's interpretation of my piece. A lot of information is lost to the chaos of the process; however, the placement of the water and architecture remains the same throughout.


This is where the piece became the most chaotic; however, you can see the beginnings of the focal tower.


This is where the piece transformed into its current composition. The final stage was more to do with style.

While it is fair to say that in its current chaotic state it certainly won't compete with the most talented artists out there, I think it's a fantastic companion for quickly generating early concepts or visualizations. I also see a real time-saving element in this and future versions of it. While it is very difficult to control to the point where you are designing what is in your head, the chaos really brings a lot of "happy accidents" to the mix.

What I find quite odd is that the portrait generator is actually much easier to control, perhaps because facial landmarks are much more consistent across the board than geographical ones. I was briefly reminded of https://thispersondoesnotexist.com/, a frighteningly convincing image of a human being generated with a press of the refresh button.


The following video is an experiment I conducted in Artbreeder involving the same face with the same genes, but from different angles; it gives an idea of how well the AI grasps the concept of a likeness. The idea here was that it might help artists understand what their characters need to look like from certain angles.


This character was also generated through many generations of... grandparents, I suppose? With minor changes each time.

It's quite hard not to see this as a little unsettling, like breeding the perfect human. There are even sliders for racial percentage, which open more questions than I care to get into in this post.
Nevertheless, it is an interesting tool I could see being used. In fact, Travis Davids used it for his visualization of real-life South Park characters here:
https://www.artstation.com/artwork/D5zRxn
In his description, he also outlines many other AI techniques used in the project, so it is at least a testament to how AI may help us as artists visualize at a much higher level.
Personally, I feel that a computer's grasp of creativity is chaotic and random, and that our input as curators is what guides it down the path we select. I will conduct more investigations to see how this can be explored further.

References:
https://www.artaigallery.com/
https://www.artstation.com/artwork/D5zRxn
https://youtu.be/gn4nRCC9TwQ
https://artbreeder.com/
https://youtu.be/cQ54GDm1eL0

LJMU Immersive Research 2

Article / 15 November 2020


It has been some time since my first post on this Masters, and in that time we have learned a lot about what it means to be a working artist within the field of immersive art. The main topics we have covered range from technical skills such as projection mapping, 360 video, and Unreal Engine to more nebulous ideas such as the art of storytelling itself.

I like to keep my options open and explore every avenue, so this will be a broad net cast over what I've been looking at; I want to focus on something specific later on.


With the tools and techniques presented to me, I have been performing small experiments to get to grips with the work. I will start with my research and show the experiments towards the end.


1 Research

This section will outline my main points of research, including video links and my general opinion on their application within the field.

1.1 Projection Mapping

2019 St Raphael - Projection Map - Mathieu Martin

Projection appeals to me because it holds great power for relatively old technology. It can be completely transformative for spaces, architecture, and objects.

I have a couple of main points of reference when it comes to projecting images. First is this amazing interactive piece that calculates the height of the sand in a sandpit and projects map topography accordingly. The illusion holds up really well, and I like the sand acting as a medium between the digital and the physical.

Similarly, this topographical projection was achieved by another group. This time, however, the end was not educational but retail-focused. It homes in on the idea of "retail therapy", creating an immersive experience from the rather mundane task of purchasing footwear. It brings to mind a future where your retail outlet becomes something more like a gaming experience.

I like these two videos specifically as they show how a simple idea such as topographical contour lines can turn something physical into an interactive experience. A rough sketch of how the sandpit piece might work under the hood is below.
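As far as I can tell, the sandpit pipeline is: a depth camera reads the height of the sand, the height map is split into bands, and each band is projected back onto the sand as a contour colour. Here is a minimal sketch of the banding step in Python, with the depth-camera input stubbed out as smoothed random noise; the camera, calibration, and projector plumbing are all assumptions left out entirely.

```python
import numpy as np

# Stand-in for a depth-camera reading of the sand surface: smoothed random noise.
rng = np.random.default_rng(0)
height = rng.random((48, 64))
for _ in range(10):  # crude blur so the noise reads as hills rather than speckle
    height = (height
              + np.roll(height, 1, axis=0) + np.roll(height, -1, axis=0)
              + np.roll(height, 1, axis=1) + np.roll(height, -1, axis=1)) / 5.0

# Quantise the heights into bands; each band becomes one contour colour that
# the projector would paint back onto the physical sand.
N_BANDS = 8
bands = np.digitize(height, np.linspace(height.min(), height.max(), N_BANDS))
print(bands)  # a 2D grid of band indices, ready to be mapped to colours
```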

1.2 VR/360 Video


VR Station - Digital Painting - Simon Stålenhag
360 and VR both face similar challenges in directing a viewer's attention. VR thankfully has years of game-design theory to fall back on; 360-degree 3D environments are no new concept in the world of game design. This links back to 360 video too: the direction of a viewer's gaze is highly important. A great article about how to do this can be found here: https://uploadvr.com/vr-film-tips-guiding-attention/

For me, 360 and VR are certainly things I would like to see more of in the future; however, I'd like to avoid them for this project. My biggest concern is hygiene, during the most hygiene-conscious period in recent history. I simply don't think a headset is a viable solution for an art installation. Even without the hygiene concerns, I would like to avoid an experience the viewer can have at home.

That being said, I do see a future in commercial VR products. Not only do they lean on pre-existing game design ideas, but in a way they expand the user's living space to virtually anywhere imaginable.



1.3 AR/MR


AR and MR are very exciting to me for an admittedly very childish reason: namely, they're the closest thing to a hologram we have, and with new advancements in machine vision, they're more accessible than ever. My primary experiments in this field have involved SparkAR as well as Adobe's new product, Adobe Aero.
I watched some talks at the Adobe MAX conference involving Aero and was frankly blown away by how easy the whole thing was. Just assemble your scene and triggers as in any other 3D software (even simpler in some ways), save it out, and boom, it's on your phone blending with reality.
More recently I caught a SparkAR talk by the artist collective Keiken titled "World-building and merging the physical and digital". The talk was very new-age: trippy, challenging of gender roles, and overall something you'd expect to see on Adult Swim at 2 AM. Jolly good stuff. They did something that spoke to me.
They posed the idea of using face filters and MR as a means of world-building. The more you think about it, the more our digital personas are a world-building exercise; if not world-building, then at the very least world-augmenting.
We use filters, selective opinion voicing, selective angles, lighting, moments, and overall decision-making to project our most ideal selves outwards. With face filters advancing to the point where we can digitally sculpt ourselves into any ideal shape imaginable, it's not hard to see a trend of our real-world selves growing ever further from our virtual counterparts. What's interesting about Keiken is that they have taken this and turned it into an intentional, expressive piece of reality distortion, to the point where it becomes fantasy.
Simple SparkAR example


From the various talks I have been watching, the general idea I'm getting is that companies are most excited about AR and the ways it may enhance our experience of the world. VR still has a way to go, and projection almost feels low-tech in comparison to what you can do with a virtual lens. AR not only has the potential to completely alter our view of the reality we physically occupy, it has all sorts of applications in education, entertainment, fashion, gaming, and so on.


1.4 Web



I will briefly touch on the web, as it is a sadly underappreciated facet of the world of XR. What benefits the web most is accessibility: most people don't want to install an app, or oftentimes even go anywhere, to access content. The web has you covered, and with CSS and JavaScript being among the most approachable languages out there, it's amazing as a tool for artistic expression. Net.art and ARGs (fake conspiracies made up to send players down online rabbit holes, often just used as marketing for an external product) have been prevalent for many years, and new developments in 3D web tools have only enhanced this growth. While I could name many such experiences, my favorite of recent memory has been this accidental game created from a virtual tour of a house.
8800 Blue Lick Road, from what I have found via proxies to American news sites, was once home to a mafia fencing racket. Boxes were stolen, presumably from trucks and warehouses, and stored here. The owners were incarcerated for their actions, and their home was put on the market without the stolen goods being removed. What's even more interesting is that someone was sent in with a 3D camera to take a tour of this labyrinthine property. Apparently it was once a Baptist church, so some oddities are present in the layout, such as a strange walk-in shower room, presumably used for baptisms.

The experience, in the end, is a truly unique and unintentionally fascinating narrative of how the criminal underworld lives.




2 Experiments


This is a collection of my various experiments in XR, including 360 video, photogrammetry, AR, and rear projection. My trademark wide range of unfocused experiments will hopefully help out with future ideas. Of all the subjects I have covered, the augmented reality side calls to me the most, so hopefully more experiments in this area will follow.

References
https://youtu.be/bA4uvkAStPc
https://youtu.be/07hiEtggHXw
https://uploadvr.com/vr-film-tips-guiding-attention/
https://youtu.be/1F83n8W2JUg
https://youtu.be/K_-zNcTjZh0
https://youtu.be/0DZ0wBjFKg4
https://3d-marketing.captur3d.io/view/keller-williams-louisville-east/8800-blue-lick-rd

LJMU Immersive Arts Research Post 1

Article / 29 September 2020



Today marks the end of the first week of lectures in Immersive Arts. We have been instructed to start a research blog documenting our journey through what we find interesting within the field, hopefully leading to a final project that ties into the research.



I'll start by laying out my understanding of what constitutes an immersive work of art. To me, an immersive artwork is one the viewer feels a part of, a direct influence on the outcome or journey witnessed within the work. To this end I would categorize most forms of play as immersive experiences; the "art", however, is in the medium.

I wouldn't necessarily consider a game of football an immersive artwork, though a game of FIFA 2020 might be. In the opposite direction, I wouldn't consider the dreadful 2000 film "Dungeons and Dragons" an immersive artwork, yet I would consider its tabletop counterpart, in which characters created by the players are led through a story in the world of Dungeons and Dragons by a host, to indeed be immersive art. It involves an often improvised narrative given a sense of immersion through the players' imagination and our joint desire to tell stories. No two games are alike; everything depends on every person in the room and their unique goals, senses of humor, and overall personalities.




But art exists outside some dude's basement. The dungeon master can't be present for every patron's visit to the Tate, so what about works that are always accessible yet provide the same sense of taking part?

Allow me to introduce AI Dungeon: a game that uses artificial intelligence to create a narrative similar to how a human dungeon master would. Presentation-wise it's basically a text-based adventure game; however, instead of hitting you with an "I don't understand that" when presented with unfamiliar verbiage, the system tries its best to adapt to whatever is said. It also doesn't have a predefined structure; instead, it tries to learn and alter the course of the story based on its database.
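My guess at the core loop, reduced to a sketch: the whole story so far is the prompt, the player's raw text is appended, and a language model improvises the next beat. Everything below is an assumption on my part; `continue_story` is a hypothetical stand-in for whatever model actually powers the game.

```python
def continue_story(story_so_far: str) -> str:
    """Hypothetical stand-in for the language model that writes the next beat."""
    return " The innkeeper eyes you suspiciously and reaches under the bar..."

story = "You enter a dimly lit tavern on the edge of the kingdom."
while True:
    print(story[-500:])  # show the latest slice of the tale
    action = input("> ")
    if action.lower() in ("quit", "exit"):
        break
    # No parser and no fixed verb list: the player's raw text simply becomes
    # part of the prompt, and the model improvises a continuation around it.
    story += f" You {action}." + continue_story(story)
```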


Naturally, given the nature of AI, there are certain discrepancies in the logic, like how the diplomats get away from the player character in the carriage that he is hiding in. This doesn't really ruin the overall experience; something about a world in which anything you can imagine can happen is so appealing to us as humans that the dodgy AI and the text-based limitations melt away into something like an imagination assistant.


Another important facet of immersive art and storytelling is the group of technologies known as XR. This incorporates VR, MR, and AR: technologies that in some way try to fool our perception of reality into incorporating some digital element. The viewer is immersed in the art as their mind perceives the digital to be real. This blending of the digital world and the real world is becoming more and more prevalent, most commonly found in apps on our phones, VR headsets in our games, and displays in our everyday lives.


During my research before this course, I found out about a technique known as Pepper's Ghost. To explain it simply: shadows do not reflect, as they are not the result of light rays but the absence of them. Light, on the other hand, reflects and bounces around freely, so a "hologram" display can be made with nothing more than a projector and an angled pane of glass or reflective transparent plastic; bright subjects appear to float while their dark surroundings vanish.
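For completeness, the trick is nothing more than plane-mirror optics; this is the standard mirror maths, not anything specific to a particular display:

```latex
% Law of reflection and the plane-mirror image rule, with the pane at 45 degrees:
\theta_{\mathrm{reflected}} = \theta_{\mathrm{incident}}, \qquad
d_{\mathrm{image}} = d_{\mathrm{display}}
% The "ghost" appears to float behind the glass at exactly the distance
% the hidden display sits in front of it.
```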


This is what we saw at the 2012 Coachella performance featuring Tupac and Snoop Dogg on the same stage 16 years after Tupac's death. 



I found this so interesting that I ordered one of those cheap little plastic pyramids you can attach to your phone and created this in After Effects, using four photos of an anatomically correct skull for medical students that I just happen to own for totally non-creepy reasons.


A recent example of this sort of fake hologram occurred when I left Leicester, and my friends and I spent my last day there at the National Space Centre. Of all the crazy, interesting stuff going on there, some really cool hologram displays caught my eye; they can be seen here. I found this fascinating, and what's worse is I have no idea how it works; I couldn't find a projector anywhere, and I spent around ten minutes looking. What's so cool about it to me is how it brings home the space age while functionally depicting, through the animations, how these tools actually work.

One thing I have been considering for a final project involves a "smart mirror": the mirrors people build from special one-way mirrored material with a built-in display hooked up to a Raspberry Pi. The idea is that the first thing you see in the morning when you brush your teeth is now also a display for all sorts of useful information. I think it would be cool to combine this technology with an Xbox Kinect to create virtual 3D spaces within a mirror. It could be a great format for a story about the modern age of digital vanity.
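To illustrate the display half of the idea, here is a minimal sketch of the kind of program the Pi might run, using Python's built-in tkinter; the key design point is the pure black background, since the one-way glass only stops mirroring where bright pixels shine through. The Kinect half is well beyond a quick sketch, so this shows only a clock.

```python
import time
import tkinter as tk

# Smart-mirror basics: a fullscreen black window keeps the one-way glass
# reflective, while anything drawn in bright white appears to float in the mirror.
root = tk.Tk()
root.attributes("-fullscreen", True)
root.configure(bg="black")

clock = tk.Label(root, fg="white", bg="black", font=("Helvetica", 72))
clock.pack(expand=True)

def tick():
    clock.config(text=time.strftime("%H:%M:%S"))
    root.after(1000, tick)  # schedule the next refresh in one second

tick()
root.mainloop()
```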

Hopefully this gives a solid idea of what I know and find interesting. I hope this course teaches me the tools I need to get really creative within this field.

References
https://aidungeon.io/play-ai-dungeon/

https://en.wikipedia.org/wiki/Pepper%27s_ghost
https://www.amazon.co.uk/AOWA-Projector-Universal-Smartphone-Accessories/dp/B07RKNZ9BJ/ref=sr_1_2_sspa?dchild=1&keywords=plastic+pyramidhologram&qid=1601401460&sr=8-2-spons&psc=1&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUEzOVNEOFpCREswN1JIJmVuY3J5cHRlZElkPUExMDQ0NTMwMjNINTI3NFVYUkdJMiZlbmNyeXB0ZWRBZElkPUEwNDQzNzEzMVBNNFBUUlkzT0RKTSZ3aWRnZXROYW1lPXNwX210ZiZhY3Rpb249Y2xpY2tSZWRpcmVjdCZkb05vdExvZ0NsaWNrPXRydWU=
https://www.youtube.com/watch?v=RWjvJq4Zabk
https://www.youtube.com/watch?v=GYXTUf6UBUo

Some more cool stuff:
https://twitter.com/RenatoMunari/status/1288197181229486081
https://twitter.com/duck/status/1291310189401059328
https://twitter.com/rich_lord/status/1291061786712580096
https://twitter.com/larrykim/status/1286414031721394177
https://www.youtube.com/watch?v=oCwE5ayHgjM