Wednesday, May 8, 2019

Submerged Topology T4T LAB 2019

T4T LAB 2019 Texas A&M University. Invited Professor: Joris Putteneers.
Team: Ozzy Veliz, Hannah Terry, Evelyn Ringhofer, Hunter Sliva, Jasper Gregory

Submerged Topology

In his essay "After Finitude: An Essay on the Necessity of Contingency," Quentin Meillassoux introduces the notion of ancestrality in objects and its alienation from human comprehension, examining what it means for objects to exist in a non-anthropocentric world. Submerged Topology exists within this "what-if": a pure narrative speculating on how objects may exist, may have existed, and on their relation to their counterpart, an artificial intelligence long distanced from its human trace.

The object in this project exists in a post-anthropocentric world where humans have ceased to exist and the generations that followed have led to the development and evolution of AI. In its quest for knowledge and self-learning, the AI travels to remote areas from which no information or data has yet been extracted. Upon reaching the bottom of the ocean, an area no human ever explored, the AI acquires new visual data and adds content to its library. This place is where the object exists and has existed. With no prior record of an object at this scale, the AI assumes that humanity's inability to find it allowed it to thrive and grow, free of human corruption, over millennia; as far as the AI knows, its point of origin in time is inaccessible but presumed to be millennia old. As Meillassoux would have it, the object becomes a supra-ancestral object and must therefore be analyzed in universal terms that do not correlate to man-made ontology. In this case, its textural qualities are represented through the machine's first encounter with the object.

To understand how the supra-ancestral object came to be, in its form and its growth, the AI analyzes the figure in relation to its surroundings, in this case the benthic layer of the ocean. In this environment, forms such as hydrothermal vents, and the object itself, are shaped by the abundance of minerals and smoke seeping from fissures in the oceanic crust.

The AI therefore interprets the object through an analysis of possible growth algorithms driven by the influence of its harsh environment. Encoded in the object is a series of representations separate from what the naked eye can see. In this mode of representation, "machine vision," the AI can register pressure and mineral deposits and laminate the image of the object across infrared, solid continuity, sectional analysis, and the color spectrum. This mirrors a familiar rendering and viewing technique, depth passes, in which a machine laminates a single image by overlaying passes of different qualities onto it. In the case of the elevation, light uncovers the many passes the AI reads while examining the object, revealing the section and its air pockets. As the AI reflects on the object's origin, characteristics such as the object's layers and their thickness are observed to be direct responses to the environment. Through these visual observations of the environment and the newly discovered object comes a better understanding of the object's ontology, opening new possibilities of data and discovery.
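As an illustrative sketch only, not part of the project itself, the lamination of passes described above can be imagined as a weighted overlay of normalized image layers. The pass names and arrays here are hypothetical stand-ins for the depth, infrared, and sectional data the text mentions:

```python
import numpy as np

def laminate_passes(passes, weights):
    """Overlay several single-channel passes into one composite image.

    passes  -- list of 2-D arrays (e.g. depth, infrared), all the same shape
    weights -- one blending weight per pass
    """
    if len(passes) != len(weights):
        raise ValueError("need exactly one weight per pass")
    composite = np.zeros_like(passes[0], dtype=float)
    for layer, w in zip(passes, weights):
        # normalize each pass to [0, 1] before blending
        layer = layer.astype(float)
        span = layer.max() - layer.min()
        if span > 0:
            layer = (layer - layer.min()) / span
        composite += w * layer
    # renormalize by total weight so the composite stays in [0, 1]
    return composite / sum(weights)

# hypothetical 4x4 "scan" passes: a depth gradient and a flat infrared reading
depth = np.linspace(0.0, 1.0, 16).reshape(4, 4)
infrared = np.ones((4, 4))
composite = laminate_passes([depth, infrared], [0.7, 0.3])
```

Each pass contributes its own quality to the single laminated image, much as the elevation drawing overlays what the machine reads in different spectra.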

Much like the image layering of the elevation/section, the sound heard in the short film is an analysis of the machine's collection of data across multiple spectrums. Audio crackles and disparate snaps represent the AI viewer interpreting or analyzing pieces of the object, much as the lidar sensors of autonomous cars view their surroundings through laser ranging and 3D-scan objects in real time. Here, different layers of lamination and complexity are manifested as varied layers of audition: the machine reads with sound, slow pulses of low-frequency noise, and produces a crackling sound after analysis. The spotlights in the video and drawings allow us, the human viewers, to see, but only a sliver of what the machine views; an AI would see in wavelengths of light invisible to human perception and through methods outside human intelligence. This audio/visual accumulation of data by the exploring AI is its attempt to gain an understanding of an object similar to itself.
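The sonification described above, scan readings rendered as slow low-frequency pulses, can be sketched as follows. This is a speculative illustration, not the film's actual sound pipeline; the readings, base frequency, and envelope are all assumed for the example:

```python
import numpy as np

def sonify_scan(samples, base_hz=40.0, sample_rate=8000, pulse_s=0.25):
    """Map normalized scan readings in [0, 1] to slow low-frequency pulses.

    Each reading becomes one short decaying sine pulse; a higher reading
    shifts the pulse slightly above base_hz, so the analysis is "heard."
    """
    t = np.linspace(0.0, pulse_s, int(sample_rate * pulse_s), endpoint=False)
    pulses = []
    for s in samples:
        hz = base_hz * (1.0 + s)        # reading shifts the pulse pitch
        envelope = np.exp(-4.0 * t)     # decaying envelope gives the crackle
        pulses.append(envelope * np.sin(2 * np.pi * hz * t))
    return np.concatenate(pulses)

# hypothetical readings taken along one pass over the object's surface
readings = np.array([0.0, 0.5, 1.0])
audio = sonify_scan(readings)
```

Denser or more complex regions of the object would yield more closely spaced, higher-pitched pulses, one possible reading of the film's layered audition.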