Tuesday, May 31, 2016

T4T LAB 2016. Object Redux. Piranesian Object

















T4T LAB Spring 2016. Object Redux. 
Invited Professor
Adam Fure
Team: Matt West, Sophia Kountakis, Erin Biedeger, Reuben Posada, Brooks Van Essen

Our project is a Piranesian object developed with Mannerist operations like disjunction, striation and overlap, and generated by the Stamp.
We began with an interest in three canonical churches (Il Redentore, San Giorgio Maggiore, and Rainaldi’s Santa Maria in Campitelli) and observed how ideas of insertion, intersection, and interaction of spaces in plan reveal or conceal themselves. This led us to dismantle the churches and compile them into a historical data set of objects, and then to reconstruct a new plan-object that blurs the line between figure and ground.

The items in Piranesi’s Campo Marzio are classified as either Autonomous Infill or Architectural Infill, but we see the plan-object as being both: each part within the model was placed there intentionally, just as Architectural Infill is, yet at times these elements combine with others to generate secondary forms with a scalar hierarchy, or become subservient to another such piece in the vicinity. Just as in the churches we studied, we placed these objects offset from one another to create the same overlapping of spaces that interests us. In this way the plan-object creates its own ontology, as it has no anthropocentric quality or use that can be perceived by a viewer.

This brings us to the notion of the Stamp. In our project it takes the form of Boolean operations of varying severity acting on extrusions of our plan. Stamps appear as shallow indentations resembling a bas-relief and as deep gouges closer to a die punch. It was these deep stamps that generated some of the strange intersections that have become such a central idea in our project. As two parts intersect, the resultant geometry carries a resemblance to the original part, yet at the moment of intersection that legibility is somewhat lost. This blurring of the stamp reinforces the object’s autonomy.
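As a rough sketch of what a stamp of varying severity might look like in code (a hypothetical Python example using the trimesh library with a Boolean backend installed, not our actual modeling workflow), a small volume subtracted at a shallow overlap produces a bas-relief indentation, while a deep overlap produces a die-punch gouge:

    import trimesh

    # Host volume standing in for an extrusion of the plan.
    host = trimesh.creation.box(extents=[4.0, 4.0, 4.0])

    # Stamp volume; how far it sinks into the host sets the severity.
    stamp = trimesh.creation.box(extents=[1.5, 1.5, 1.5])
    stamp.apply_translation([0.0, 0.0, 1.5])    # shallow overlap: bas-relief indentation
    # stamp.apply_translation([0.0, 0.0, 0.0])  # deep overlap: die-punch gouge

    # Boolean difference carves the stamp out of the host.
    stamped = host.difference(stamp)
    stamped.export("stamped.obj")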

The plan-object became the epicenter of our object: first as the stamp operating upon a cube, then as the interior to a more subtle object with similar ideas of stamping (in the bas-relief), and then as a skeleton onto which new sub-objects and elements of that plan can be placed (or stamped) again and again, mining the precedent material and creating a labyrinthine object that calls to mind Piranesi’s Carceri. The plan-object, with its offsets and intersections with the cube, or with other iterations of itself, generates strange spaces as well as new poche that echoes the source objects. The exterior of the object that we see is thus the reverberation of the interactions of the object at its innermost. Yet the object’s interactions and their results do not necessarily inform the overall nature of the object. As Jonah Rowen states, “Theseus may be able to use the trick of the thread to find his way around the labyrinth, but the omniscient view is reserved for the architect.”

The idea of the Labyrinthine Object is that the Stamp creates spaces that cannot immediately be identified, and there is no real way to perceive the object’s interior and its progression. Yve-Alain Bois writes that “The elevation cannot provide the plan [and vice versa], for as one walks around it, one finds no element that has maintained a relation of identity with the others.” Like the Carceri engravings, the object is composed of recognizable canonical architectural elements, yet the composition itself is strange in terms of its mereology and organization. Stamping in from the exterior generates unfamiliar relationships between familiar objects. In this way the Stamp acts as a force estranging the interior of the object from the plan-objects that created it.

The surface articulation itself is the result of our study of positive and negative stamping, that is, stamping in from the outside and out from the inside, as well as the notion of the field and our investigations into the formal stamp versus the pixel stamp. The formal stamp comprises the actual moves and modifications we have made to the object’s geometry; the pixel stamp is the post-production process we used to create the field of stamps, or Stamps in the Extended Field. The stamps thus function on all levels of the object, bilaterally but asymmetrically. In this way an effectively infinite field of stamps is created at random; despite its repetition it maintains uniqueness and difference. This field of stamps becomes, in part, the new object: the coalescence of the surface conditions of its predecessors, draped over a form generated by the plan-object.

As we moved forward into studying the color and material logic of our object, we looked to the strange, brutalist nature of the form: heavy orthogonal elements articulating new corners and new scalar relationships. For the material brutality to follow the formal brutality, we imagined our object as having the lithic quality of striated, colored marble; a form that is such a synthesis of its own precedents necessitates a material that is similarly unlikely.

Our object is a thing totally removed from its source data, yet of a similar nature: a series of objects overlapped to create a new space. Operations of the Stamp act upon surfaces to create new conditions, on that surface and beneath it, and a space that is thoroughly labyrinthine, a strange extension of the disjunction of the source objects, one whose whole complexity only a viewer of every plan and section could even hope to grasp. This hermitry of the object affirms its Architecture and its Autonomy while denying the metaphysics of presence.








Sunday, May 29, 2016

T4T LAB 2016. Object Redux. Object Obscuration













Model Photos











T4T LAB Spring 2016. Object Redux. 
Invited Professor
Adam Fure
Team: Justin Zumel, Elin Verhoeven, Tung Dinh, Maria Fuentes

Within the realm of representation, a shift in resolution occurs during the transition between mediums, creating the possibility of emulating new objects.

In our process of photogrammetry, we rediscovered the idea of flattening. Because the camera acts as the intermediary between the physical and the digital, there is a loss of information that creates formal and textural discrepancies. In our methodology, the fidelity of the reproduced digital objects can be controlled through the number of photos used: fewer photos equate to more data loss and tears within the mesh, while more photos produce a denser polymesh.

The categories of objects in our mereology consisted of physical natural rock, physical synthetic rock, and rock-like object. Through this process, these objects’ unequivocal qualities dissolved into the same field, to the point that the qualities distinguishing the three categories became obscured. Another production of this method was an image that represents the object through texture, creating a flattened representation of the object as if the object were a flat image.

Through this exploration of the controlled resolution of meshes, we then created an object digitally to be added to this array of mesh counts. We noticed that as we widened this spectrum of density from high-poly to low-poly meshes, the low-poly meshes turned into flat planes or faces. These flat planes do not represent the rocks, but the idea of flattening. Again, this idea of flattening is not only a production of the photogrammetry, but also a production of the relationship between the pure digital, the pure physical, and the translation from the physical to the digital.
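As a sketch of this kind of controlled resolution (a hypothetical Python example using the Open3D library and placeholder file names, rather than our actual photogrammetry toolchain), progressively decimating a single scanned mesh walks the same spectrum from dense polymesh down to a handful of flat faces:

    import open3d as o3d

    # Load one photogrammetry mesh (placeholder path).
    mesh = o3d.io.read_triangle_mesh("scanned_rock.obj")

    # Decimate to progressively smaller triangle budgets, walking the
    # spectrum from dense polymesh down to a few flat faces.
    for target in [50000, 5000, 500, 50, 8]:
        low = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
        o3d.io.write_triangle_mesh(f"rock_{target}_tris.obj", low)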

These planes and objects were then mapped with textures that did not emanate from the objects themselves, but from other objects. For example, the texture map created for one object was mapped onto another object to form a further degree of obscuration: the delineation between what the object was and what the object manifests itself as through digital manipulation. To push the idea of obscuration even further, our collection of textures came not only from the photogrammetry, but also from photographed textures of found objects. In the post-processing phase, another level of obscuration arises from the gap between how a rendering software actualizes a digital texture and how Photoshop renders that same texture. This degree of obscuration is executed by creating moments of photoshopped texture filters within the same image as the raw render.
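One way to express the cross-mapping described above at the file level (a hypothetical .obj/.mtl fragment with placeholder file names, not our actual material definitions) is simply to point one object’s material at a texture image baked from another object:

    # rock_A.mtl -- material definition for the mesh of rock A
    newmtl rock_A_material
    Kd 1.000 1.000 1.000
    map_Kd rock_B_texture.jpg    # diffuse texture baked from rock B, not from rock A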

In this project there are two autonomous forms that create the illusion of a dichotomy rather than a visual articulation of cohesion. In a trivial reading, this illusion presents the flat planes as a form of disruption, splitting these objects; but what these planes are argued to do is create a plane on which the objects have a relationship with the ground. In a way, these planes create a notion of flattened ground, where all of the objects have a direct relationship with their planes. This holds onto the idea of figure and ground: all of the objects sit on the same field at different degrees of flattened representation.
As a whole, this project is a question of representation and translation: are these objects a representation of the initial object, or are they the creation of new objects that do not rely on the idea of representation? Through the process of photogrammetry they do produce a representation of what was scanned initially, but because the input of images was controlled to purposefully produce tears, these objects become new objects. As a result, the idea of representation falls away.

Thursday, May 19, 2016

T4T LAB 2016. Object Redux. Supracestral Objects






























T4T LAB Spring 2016. Object Redux. 
Invited Professor
Adam Fure
Team: Braden Scott, Ben Schoenekase, Lydia Pifer, Logan Whitley

.obj is a post-digital architectural project that deals with the creation of form through the direct editing of code.
Throughout the semester, we have had the opportunity to talk with Adam Fure and to study how form and space can be created in unconventional ways through the manipulation of objects. While we were discussing a previous project with Adam, he made a comment that was the initial spark in a chain reaction of ideas leading to what you see before you: "Architectural precedents no longer serve as purely referential, but rather they serve as a database that present day designers can pull from." Architects often look toward successful precedents to begin their designs, but with the rise of computational architecture we can now pull parts from several different designs and assemble them into a new whole. The database idea intrigued us, and when we sat down as a team, the idea finally hit us. We took an .obj file and, instead of opening it in Maya or Rhino, we opened it in a simple text editor. What we found was a massive database of numbers relating to four things: vertex texture information, vertex normal information, and, what would turn out to be most important to our process, vertex coordinates and face data.
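Opened in a text editor, even a trivial .obj file reads as exactly that kind of database. The fragment below is a hypothetical single quad, not one of our objects: v lines hold vertex coordinates, vt lines hold vertex texture coordinates, vn lines hold vertex normals, and f lines hold the face data that stitches them together:

    v 0.0 0.0 0.0
    v 1.0 0.0 0.0
    v 1.0 1.0 0.0
    v 0.0 1.0 0.0
    vt 0.0 0.0
    vn 0.0 0.0 1.0
    f 1/1/1 2/1/1 3/1/1 4/1/1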
This data was never meant to be seen by us, the designers. As it stands by itself, it is meaningless, but once paired with any major modeling software, the computer can translate it into a legible form. We now had the power to manipulate form, without ever looking at it.
We were super excited to discover this, and we were ready to begin designing blind. To really gain an understanding of the process, we wanted to start with the most basic architectural precedents: primitive geometry.
Because this newly discovered process is so expansive, we decided to limit our initial manipulations. We took vertex coordinates from primitive geometries such as the pyramid and crossed them over with another, such as the prism. What emerged was a very interesting form that happened to be very architectural. We did this with every primitive to create a second generation of post-digital objects. We then crossed those to create a third, those to create a fourth, and finally one more time to create a fifth. Through this ancestral process, 256 objects were created, each with a unique form. In this box you will find a select few that relate to our larger prints. As you explore them you'll find some simple objects and some with seemingly microscopic details.
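The crossing itself can be done in any number of ways, and we are not prescribing a single recipe here; as a minimal, hypothetical sketch in Python (placeholder file names), one version grafts the vertex coordinates of one primitive onto the face data of another without ever opening a viewport:

    # One hypothetical "crossing": keep the vertex coordinates (v lines) of
    # one parent and the remaining data (vt, vn, f lines) of the other.
    def cross(parent_a, parent_b, child):
        with open(parent_a) as fa, open(parent_b) as fb:
            a_lines = fa.readlines()
            b_lines = fb.readlines()
        verts = [line for line in a_lines if line.startswith("v ")]
        rest = [line for line in b_lines if not line.startswith("v ")]
        # If the parents have different vertex counts, some face indices will
        # no longer resolve cleanly, which is part of designing blind.
        with open(child, "w") as out:
            out.writelines(verts + rest)

    cross("pyramid.obj", "prism.obj", "generation2_01.obj")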
The inclusion of software as an architectural medium has become a constant in our profession; understanding the software as a whole, and how the computer interprets this data as simply a field of numbers, is therefore vital to the new, post-digital architect. Computer modeling software like Rhino and Maya is rooted in its initial platform of the least complex form, the polygon primitive. By restoring the geometry to its most primordial state prior to manipulation, the vertex coordinates, face data, and normals can be edited to yield variable results.
Our process and series of manipulations denies the inclusion of preference and personality by withdrawing human choice. By combining and recombining objects in a simple text editor, we as designers are blind to the outcome until it is translated by software. A degree of control and authorship is relinquished as conventional considerations such as form, posture, and general aesthetics are disregarded, and the final formal manifestation of the combined object code is assembled by a completely autonomous and non-subjective logic.
In endeavoring to explore these objects and investigate their primary qualities, it was first critical to understand our work in the context and scope of Meillassouxian ancestrality. We view our objects as existing solely through primary qualities, displaced from human and subject-oriented understanding. Our approach was to break open and digitally gut each new form: to scoop out the primary qualities, the raw data viscera, a substrate that can be extracted, converted, and averaged to produce a myriad of digitally born means of representation, all of which are in fact still the object (wherein the code remains the same), simply manifested and translated through different file types. All data can be reduced to binary; through this translation the object’s code-based binary image, averaged hex-code color, and even .wav files derived from the raw data can be produced. Not only does this challenge the depth of what can be considered substance and the Eisenmanian “essence” of architectural objects, but more importantly it defines a post-digital autopoietic system by which objects can produce infinite manifestations of themselves translated across file types.
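As a sketch of that kind of cross-file-type translation (plain Python with the standard wave module; the file names and audio parameters are illustrative assumptions, not the exact conversions we used), the same bytes of an object’s code can yield an averaged hex color and a raw .wav:

    import wave

    # Read the object's code as raw bytes (placeholder file name).
    data = open("generation5_17.obj", "rb").read()

    # Averaged hex color: the mean byte value reused for R, G and B.
    avg = sum(data) // len(data)
    print(f"averaged color: #{avg:02x}{avg:02x}{avg:02x}")

    # A .wav derived from the same bytes: 8-bit mono at 8 kHz.
    with wave.open("generation5_17.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(1)       # one byte per sample
        w.setframerate(8000)
        w.writeframes(data)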
Although our objects are completely defined by primary and essentially derived qualities, and there is no subjective relation, we do not consider these objects to be ancestral; our autopoietic process can only exist after the advent of computation. Rather, we have defined our objects as Supracestral: objects which persist beyond humanity and subjectivity, objects existing posterior to human cognition. We can only know the object as it is in-itself.

As stated before, these objects were developed without a sense of aesthetic qualities. When viewing the objects after the computational process was completed, we noticed that most of the objects held certain architectural qualities, which can be attributed to apophenia. After recognizing these qualities, we placed all 256 objects into a field at varying scales and with minimal distortion, resembling the Campo Marzio. The first iteration of the Campo Marzio field used the CMYK color palette; we saw the translation from RGB to CMYK (sketched below) as the first step of translating the objects from the digital world to the physical. The next versions were developed to create an illegibility, a massing of objects rather than individual pieces: a relationship of the whole over the sum of its parts.
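That RGB-to-CMYK step can be read as the simple, profile-free conversion sketched here (a hypothetical Python version; a real print workflow would go through ICC profiles rather than this formula):

    # Naive RGB (0-255) to CMYK (0-1) conversion, ignoring color profiles.
    def rgb_to_cmyk(r, g, b):
        if (r, g, b) == (0, 0, 0):
            return 0.0, 0.0, 0.0, 1.0
        r, g, b = r / 255, g / 255, b / 255
        k = 1 - max(r, g, b)
        return (1 - r - k) / (1 - k), (1 - g - k) / (1 - k), (1 - b - k) / (1 - k), k

    print(rgb_to_cmyk(255, 0, 0))    # pure red -> (0.0, 1.0, 1.0, 0.0)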
Each object is seen as ontologically independent of all the other objects despite containing similar pieces of code. The object is a primary production of the suspension of personality, which renders it autonomous. Once the objects are placed into a field, the secondary traits of the object appear, allowing for architectural understanding and a discourse on the ontological field as a whole. This allows games of scale, orientation, and density to be argued strictly through architectural terminology, without the need for the primary formation and the ancestral reference of the object.
The object’s pinnacle existence is defined by this point of reliance on the interface to understand the binary vertex coordinates as not only text, but volume. From here the object can never again be undefined, for it exists, from this point on, in the digital world without reliance on human interaction. The computer has defined the object as such and allowed a post-digital, post-humanist evaluation of the object, giving validity to the figure in a model while also carrying it far beyond humanity. At this point, the object can be defined as SUPRACESTRAL.
The supracestrality of the object allows a digital realm to be accessed and explained within the timeline of human cognition. The supracestral object of the field is evaluated through the autonomy of its individual pieces, which now allow a parts-to-whole understanding of the field.
The Object-Oriented Ontology of the field now asks the viewer to weigh its importance, for the weight of the image poses questions to the viewer as well. Does the field exist in the physical world? Can it? For it now exists only above and beyond the human, in its ontological timelessness and disregard of humans.
The rendered post-digital objects seen in the field now hold scalar agency in the digital world, allowing only for architectural relationism within the built environment of the code. The objects can exist at any level, whether architectural, digital, or physical, but despite their existence in these distinct realms, their ontology is set.