Video Sculpture

America In A Disposable - Unity

America In A Disposable


During the summer of 2017, I embarked on a cross-country road trip across the United States, beginning in Vancouver, BC, Canada and finishing back home in New York. Along the way, I documented my trip on multiple disposable cameras, the beginning of my journey with film photography.

I have always found it difficult to decide what to do with the photos I have taken. Other than putting them on Instagram or my website, how else could I use them?

For my final in my Video Sculpture class, I decided to build a piece that could display the photos I took in an interactive and stimulating way. The AR camera detects an image target, in this case a logo I designed to signify an area along the trip where photos were taken. Once the logo is recognized, the photos from that area are displayed in augmented reality around it.

Process

I started building the concept around the idea of placing my images on a physical map and displaying information around the photos. After getting some advice and comments, I decided to switch to a more streamlined version: using the locations as targets and displaying the photos from there. I built the entire route inside Google My Maps, downloaded it as a KMZ file, converted that into a PDF, and brought it into Illustrator.

routemap small.png

My goal was to design the entire route onto a map of the USA and laser cut it all on a single piece of wood. I simplified this to cutting only the logos and drawing out the outline of the states.

I underestimated how long designing the locations would take, as I struggled to turn the logos into usable image targets for Vuforia's AR camera. This cut into the time I had for adding more interactions to the AR side of the piece, such as activating the photos by tapping the targets within the app and displaying other information about the locations.

Thoughts

Ideally, the entire route I drove would be displayed as the user views the installation. I view this as the first version of the piece, and I am considering incorporating an actual physical map, with the AR targets placed at the actual locations on the map. Other interactions could include activating the targets by touch on the AR device, displaying information about the locations, and more.

AR Photo Identifier - Unity

AR Photo Identifier

This augmented reality camera identifies photos and displays content and information connected to that image.

Process

When learning about Unity as a tool for building augmented reality pieces, and about using Vuforia Image Targets to display content, I thought of all the photos I have taken since getting into photography: all the different places I have been, and the different cameras I have used.

From there, I decided to build an AR camera that could identify my photos and display specific information associated with each one.

I chose a handful of my pictures that I wanted to have printed and that worked well as image targets, with a high number of trackable features.

Then we learned how to use a 3D scanner to scan objects, and I decided to scan my camera, which turned out to be surprisingly difficult. The camera is so small, with so many intricate gaps, that it was not the easiest object to scan.

I wanted to add more interaction within the AR app, like a running browser that could pull data off the internet, other photos from that location, and more features, but my time and experience with Unity only allowed for so much.

Thoughts

I was really happy with this project. It is more of a conceptual piece, and I want to take the concept and use it in multiple settings. I picture it being used in museums next to pieces of art or objects, in place of museum labels.

Image with Specific Targets to Identify in AR


Godzilla Controller - Max

Godzilla Controller

The Godzilla Controller is an interactive video piece that utilizes user audio input to directly manipulate the content of the video.

Process

For this assignment, we used the program Max 7. I had missed the class where we were introduced to the program and learned its basics, so I felt a little behind when beginning the project. While playing around with Max's different tools and functions, I was drawn to the microphone and the idea of using it as a controller. I had been watching some of the old Toho Godzilla films at the time, and for some reason I was compelled to incorporate Godzilla into this assignment.

At first I had a great deal of difficulty coming up with a clear conceptual background for this project in particular; instead, I found a way to use the tools of Max in a way that was enjoyable.

Whenever Godzilla attacks a city in one of those old films, the sound of the screaming bystanders is impossible to avoid, and I couldn't get it out of my head. I wanted to show two different views of Godzilla's destruction: one with Godzilla plowing through a city, with only the sounds of his path of chaos playing; and one with the sounds of both his destruction and the screams of the bystanders. The viewer would switch between the two audio sources by screaming as Godzilla annihilates the city. The idea of someone screaming at a screen as footage of a man in a lizard suit destroys a miniature city seems funny to me, and I felt that a screaming user might find it enjoyable too.

It took me a long time to recognize the different ways to achieve similar results in Max, as well as just the basics of the program. I built patch after patch after patch, trying to get all of the tools to work, never seeming to get it right. It was only when I showed my concept to Gabe that he was able to show me an incredibly simplified version of my idea that worked just as intended.

The code for this MAX project.

godzilla controller.png
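The patch gates between the two audio tracks based on microphone level. As a rough sketch of that logic in plain C++ (the names and threshold values here are my own hypothetical choices, not taken from the actual patch), with a little hysteresis so the switch doesn't flicker when the room is right at the threshold:

```cpp
// Hypothetical sketch of the patch's core logic: microphone amplitude
// above a threshold selects the "screams" track; hysteresis (a lower
// off-threshold) keeps the selection from flickering.
enum Track { DESTRUCTION_ONLY = 0, WITH_SCREAMS = 1 };

struct ScreamGate {
    float onThreshold;   // amplitude needed to switch to the screams track
    float offThreshold;  // amplitude below which we switch back
    Track current = DESTRUCTION_ONLY;

    Track update(float amplitude) {
        if (current == DESTRUCTION_ONLY && amplitude >= onThreshold)
            current = WITH_SCREAMS;
        else if (current == WITH_SCREAMS && amplitude < offThreshold)
            current = DESTRUCTION_ONLY;
        return current;
    }
};
```

With the on-threshold set above the off-threshold, a scream flips the audio over to the screams track, and it stays there until the room quiets back down.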

Thoughts

It was only after finishing this assignment that conceptual readings of the piece came to mind. I was partly responding to our current political climate, possibly feeling that we as citizens need to get up and speak our minds, and that once we do, we will be able to hear everyone else as well. There is definitely more work to be done on the conceptual aspect of this piece.

As an actual video controller driven by a microphone, the piece works generally well, although I would like to add a bit of a delay to the audio response in this specific piece.

Stick Man - MadMapper

Stick Man

A projection piece, Stick Man shows a character on a seemingly endless quest, never able to reach the place he is heading toward. He keeps walking and walking, only to find himself back in the same spot again.

Process

Original Story Path


I worked on this project with my partner Mingna. Our first ideas involved using a video of a character walking around the confines of the projection. This turned into thinking of the character being stuck in a sort of loop. We created an intricate story for the character: walking through a castle, falling into the moat, being eaten by a crocodile, and being sent to a purgatory-like place where opening a door restarts the loop. We realized that MadMapper kept us from creating this, as it works on a loop rather than a timeline, and we would need sequencing to make that happen.

From there we looked at creating a projection layered in 3D space. Looking at different M.C. Escher pieces, we played with different ways of layering the physical space to project onto. Our first attempt at projecting onto layers was a flat wall with strips of tape layered on top. Throughout this, I was teaching myself how to animate a stick figure in After Effects, my first time animating anything on a computer. I started with different sizes of our stick figure walking across the strips of tape, the shadows of the tape, and the wall itself on an endless loop, each on its own layer. We built a frame out of PVC piping to put the tape on top of, but once we installed it, we both felt it wasn't exactly what we wanted.

Mockup of Corner Tape Placement


The next day we decided to project onto a corner of our classroom that had a wall cutting across it at a 45-degree angle. As soon as we saw the piece projected in that corner, we knew it was the right way to present that version of the work. From there we started designing shapes inside the projection and making more animations.

Thoughts

Once we got to the corner of the room, I fell in love with the project. At first I wasn't in love with the tape concept, but I think the issue was it being on a flat wall. Once it was in that setting, and with the shapes and colors Mingna chose, I couldn't get over the piece. I was excited to make more and more animations to put in it.

Light Sculpture - Live Underneath

Live Underneath


Live Underneath is an emotive light sculpture that reacts to stimuli. As a person approaches the piece, the sculpture's light pulses at a faster and faster rate.

Process

I worked on this project with my partner Chengchao. We began brainstorming, and the first idea that came to mind was "What if there was something in the corner that reacted to you?"

From there, we built on that idea, arriving at a hole in the ground that reacted as you got closer to it. Since we weren't going to cut a hole in the ITP floor, we figured we would have to build something to deceive the viewer into thinking they are looking at the floor.

early concept1.jpg
early concept2.JPG
Early concept3.JPG

We knew we wanted to use an Arduino to light up the sculpture and have it react to stimulus, so the piece needed to be at least one or two inches tall. While designing the hole and crack that the light would spill out of, we realized that having smoke billow out of the piece would help bring out the light as well as add ambience. We found a smoke machine on Amazon and decided to build the sculpture to fit the machine inside, which brought the height of the piece to about six inches.

We used the table saw to cut the sides of the sculpture to size, and laser cut the design.

Chengchao wrote the code for the Arduino so that the sculpture reacts to someone approaching it, using an ultrasonic proximity sensor. The code is here. We used MadMapper to fit a photo of the exact spot on the floor that we were covering, which worked much better than I expected. Once we installed the sculpture for class comments and critique, Gabe told us we couldn't use the smoke machine, as it could set off the fire alarms and subsequently destroy all the electronics in that area, so for that demonstration we went without it.
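Chengchao's actual Arduino code is linked above; as a hypothetical sketch of the core mapping it performs, a closer viewer means a shorter pulse interval, i.e. faster pulsing. In plain C++, with made-up distance and timing ranges (not the values from the actual piece):

```cpp
// Hypothetical sketch: map a distance reading from an ultrasonic
// proximity sensor (in cm) to an LED pulse interval (in ms).
// Closer viewer -> shorter interval -> faster pulsing.
// All ranges are assumptions, not values from the installed piece.
const long NEAR_CM = 20;    // at or inside this distance, fastest pulse
const long FAR_CM  = 200;   // at or beyond this distance, slowest pulse
const long FAST_MS = 100;   // fastest pulse interval
const long SLOW_MS = 1500;  // slowest pulse interval

long pulseIntervalMs(long distanceCm) {
    // Clamp to the working range of the effect.
    if (distanceCm <= NEAR_CM) return FAST_MS;
    if (distanceCm >= FAR_CM)  return SLOW_MS;
    // Linear interpolation between the two extremes.
    return FAST_MS + (distanceCm - NEAR_CM) * (SLOW_MS - FAST_MS)
                         / (FAR_CM - NEAR_CM);
}
```

In an Arduino loop, a function like this would be called on each sensor reading, and the returned interval would set how quickly the light fades up and down.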

Thoughts

My final thoughts on this project are of a piece of work that I am immensely proud of. It is my first project to come out of my graduate degree at ITP, and I couldn't be happier with it at this current stage. If I were in a space where I could use the smoke machine, I would happily keep it at this height; but if I were to remove the smoke machine entirely, I would shorten the sculpture considerably, nearly to the shortest height possible.
