Early in Orality and Literacy, Ong makes an interesting statement concerning computer languages that amounts to a dismissal:
We are not here concerned with so-called computer 'languages', which resemble human languages (English, Sanskrit, Malayalam, Mandarin Chinese, Twi or Shoshone etc.) in some ways but are forever totally unlike human languages in that they do not grow out of the unconscious but directly out of consciousness. Computer language rules ('grammar') are stated first and thereafter used. The 'rules' of grammar in natural human languages are used first and can be abstracted from usage and stated explicitly in words only with difficulty and never completely. (p. 7)
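Ong's point about rules being "stated first and thereafter used" can be made concrete with a small illustration of my own (the toy arithmetic language and its parser below are hypothetical, a minimal sketch rather than anything Ong discusses). The entire grammar of the little language is written down before a single 'utterance' exists, and every sentence is then judged against those prior rules; nothing is abstracted after the fact from usage:

import re

# Hypothetical toy language (my example, not Ong's): the whole grammar is
# written down here, in advance, and only afterwards used on any 'utterance'.
TOKEN = re.compile(r"\s*(?:(\d+)|(\S))")

def tokenize(text):
    # Split the input into tokens exactly as the stated lexical rules allow.
    tokens = []
    for number, symbol in TOKEN.findall(text):
        tokens.append(number if number else symbol)
    return tokens

def parse_expr(tokens):
    # expr := term (('+' | '-') term)*   -- the rule, applied as stated.
    value = parse_term(tokens)
    while tokens and tokens[0] in "+-":
        op = tokens.pop(0)
        right = parse_term(tokens)
        value = value + right if op == "+" else value - right
    return value

def parse_term(tokens):
    # term := NUMBER   -- anything else was never part of the language.
    token = tokens.pop(0)
    if not token.isdigit():
        raise SyntaxError("not a sentence of this language: " + repr(token))
    return int(token)

print(parse_expr(tokenize("2 + 3 - 1")))   # prints 4

Fed "2 + 3 - 1" the parser answers 4; fed a token the stated rules never anticipated, it simply refuses. That is precisely the inversion of the natural-language case Ong describes, where usage comes first and explicit grammar only later, and never completely.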
Later, in the section “Post-typography: electronics”, he does make some pronouncements about the impact of electronics on orality and literacy. He suggests that sound reproduction technologies are transforming the print world by making it more informal, like ordinary conversation; that all composition and printing will eventually be done with the aid of electronic equipment; and, finally, that the computer intensifies the spatialization and objectification of the word (language) through its instantaneous manipulation of the word's presence. He continues with considerations of the nature of secondary orality, arriving at a critical, ironic stance about how “we plan our happenings carefully to be sure that they are thoroughly spontaneous” (p. 134), and he does not entertain the topic of computer languages again.
Ong's central reason for dismissing computer programming languages is that they never arise from the unconscious the way a mother tongue does. Yet compared with how most late-twentieth-century Americans learned foreign languages (Latin, French, Spanish, German), computer languages have much in common with other learned languages, especially for those who only ever acquire the ability to read and write them. Humans do not experience computer languages the way machines do when they 'speak' them natively, in operating systems running programs over TCP/IP networks, but neither do Americans fully experience a learned language they can write yet cannot hear or speak.
Besides this similarity in acquisition and use, another pertinent reason why computer languages ought to fall within the scope of the study of texts and technology is that most texts today are generated by computer programs, at least in the sense of Aristotle's 'efficient cause'. With expert systems, games, and even malware that aims to seduce human beings into relationships predicated on the assumption that one is dealing with another conscious being, the 'formal' and 'final' causes also point to a machine rather than to another person. Consider why the Turing Test continues to be regarded as conceptually valid.
Finally, Ong's position can be viewed as a bias favoring human speech and writing, when it is possible that new forms of consciousness may evolve as epiphenomena of machine activities. We have no experiential data concerning what such machine languages may be like; therefore, we rely on our fantasies about them, which include not only attitudes derived from academic exposure to computer science and from casual intercourse with computers in everyday life, including fictional accounts in films such as AI, but also our experience as software engineers and programmers actively augmenting their capabilities. This hypothesis follows directly from Ong's argument, despite the fact that he, like Plato dismissing the musicians and poets, turns away from 'computer languages'. It is time to heed Michael Heim, who in “Heidegger and McLuhan: The Computer as Component” insists that scholarship needs a 'cybersage' more than reproductions of Heidegger at his Schreibstube (writing desk) writing books.
Monday, September 8, 2008