Wednesday, May 15, 2013
The philosophy of computing and philosophy of programming are both uninstantiated as explicit critical disciplines. A default concretizes around humanities philosophies of technology in academic circles and engineering philosophies of technology in technical communities. Programming philosophers are hard to find when they work in industry rather than the humanities; their private musings might be found in source code comments, though most are uttered in strange languages like C++, sometimes doubly occluded on account of nondisclosure and superseded revisions. We cannot shun study of our machine others and maintain understanding of how they, and we ourselves, work. The consequence of this avoidance is that our intellectual trajectory has decelerated into bifurcating strata of humans getting dumber and machines getting smarter. Critical programming is a digital humanities practice that privileges working code, both acts of software engineering to produce research results, and running iterative versions of those programming products to enact scholarship, research, and performance. It extends Software Studies and Critical Code Studies, and foregrounds the awareness of computing systems and architectures promoted by Platform Studies, emphasizing the materiality of code over the abstract mathematical representations of algorithms. Surpassing the consumer comportment characteristic of print-bound thinkers, programming philosophers write the code for their own language machines. With a nod to Nietzsche, this is how one may philosophize with programming.
Monday, November 28, 2011
From coursework students amass material that forms long term memories. Online learning management systems often restrict access to course content soon after the semester ends. I argue that the resulting fragile knowledge is the modern counterpart of shallow knowledge that has been the bane of writing since its inception. An examination of popular learning management systems like Blackboard and the University of Central Florida Webcourses implementation reveals tactics that students, instructors, and software developers may deploy to ensure long term retention of course materials. Lessons learned from electronic portfolios and user-centered design inform a free, open source software project that uses the Moodle course management system as the backbone for individual repositories of not only the content of online courses, but also the situated context in which the content was created.
How do students remember what they learned in a course after the semester is over, and perhaps decades later? Some content is internalized; other information may be recalled from textbooks, handouts, notes, and transcripts. Until recently, all of these records were stored on paper. Online courses, by contrast, are digitally native, and offer new opportunities for facilitating the long term usability of course content. They also pose new hazards that may result in spotty, fragile knowledge, and potentially complete forgetting, as digital records are deleted, become inaccessible, or are stored in formats that become deprecated. For instance, we presently accept PDF, HTML plus CSS, and XML as file formats for our data.
That is, we are cool with having to save stuff from websites as these kinds of files. However, the default settings encourage long term forgetting through short term neglect: content is rarely exported between the end of one semester and the start of the next. That is, most LMS users are not encouraged to archive discussions and email before the next semester begins, at which point those records become fragile knowledge, regardless of whether the computer languages encoding them persist. Consider these requirements:
I want to be able to recall coursework after the semester is over (long term).
I want to be able to remember everything I learned in every course of my college education, whether it happened in face to face classrooms or online (comprehensive).
I want to be able to access that knowledge at any time, with minimal preparation and no incremental or maintenance cost, indefinitely (automatically recalled).
I have confidence in the preservation of my course work from the years before online courses, for I have written notes and kept annotated books used in them. They can be scanned into electronic formats, run through optical character recognition systems, even turned into movies, as long as I still have the file cabinet of notes and bookcase of notebooks. But what about what I am learning online? Am I taking adequate steps to preserve it for reuse in ten, perhaps twenty, years, when I am deploying my retirement job plan?
In Why Don't Students Like School? cognitive scientist Daniel T. Willingham presents a popular model of human cognition with three parts: environment, working memory, and long-term memory (Willingham; Baddeley). This model, which localizes memory as a phenomenon within the human brain, leaves questions about how the environment, especially the Internet and other storage media, sustains brain-embedded long term memory, and in turn, fosters the growth of deep knowledge of a subject. Willingham's model of forgetting, reproduced in Figure 1, is symptomatic of a lack of consideration of the role played by machines in human cognition (55).
Figure 1: Willingham's model of forgetting suggests environment-based components of LTM mediated by working memory (55).
It must be assumed that the mind exerts its influence on the environment. Yet embodied action beyond unidirectional awareness is absent from the model. That is, there is no arrow going from working or long term memory toward the environment, in the sense of influencing, having an effect upon, or intentionally controlling it, as in Figure 1b. It is here that we discover our hidden, forgotten relation to electronic computing machinery, and the potential to be creators of that very machinery.
Kittler is correct in stating, “Understanding media – despite McLuhan's title – remains an impossibility precisely because the dominant information technologies of the day control all understanding and its illusion. . . . What counts are not the messages or the content with which they equip so-called souls for the duration of a technological era, but rather (and in strict accordance with McLuhan) their circuits, the very schematism of perceptibility” (xl-xli).
In a book-based cognitive milieu, human action brings pages before the eyes for attentive reading, which occurs in working memory, the “site of awareness and thinking” (Willingham 55). What is learned in order to be remembered is perceived by viewing graphic media (papers, books) in silent reading, reading them aloud, or listening to them be read. In a computer-based cognitive milieu, human action brings pages forward for attentive reading in working memory through the agency of programmed control in addition to print-based media. It is in getting things out of the computers for attentive awareness that new problems for thinking arise. Theories of extended cognition take the active roles performed by inanimate objects and processes external to the human brain to be parts of our thought processes, which therefore may play a role in long term memory (Clark; Clark and Chalmers). Thus the role played by the environment goes beyond supplying props that serve as reminders for human brains to recall memories from their depths. This opens space to cast the problem of knowledge, traditionally divided among the rote, shallow, and deep knowledge typical to humans, as having a fragile component as well. The term 'fragile knowledge' appears in research describing some of the problems students have learning to program computers, identified as “knowledge that is partial, hard to access, and often misused” (Perkins and Ray 2). While computer-based information systems boast many features unavailable to printed records, they can also be more fragile than hard copies, especially in the case of content generated from online course work. Research suggests that, in addition to segregating content within an enclosed, proprietary software system, few, if any, Learning Management Systems (LMS) encourage users to save content before it becomes hard to access (Jafari, McGee, and Carmean).
Thus, the outcome of online learning may often be fragile knowledge. How material is saved to long-term 'off-site' memory depends on the degree to which the operation is automated, in which the LMS delivers records to each learner as an end-of-semester zipped archive via email, for example, and the degree to which the learner actively saves them from within the learning portal. The University of Central Florida LMS, UCF Webcourses, only generates text files and copies of attachments from email and discussion threads when requested by the user. However, a mandatory online training program urges instructors to compile discussion groups, student records, and mail when the course ends, and provides step by step instructions. Ironically, instructors are not urged to remind their students to do the same. Moreover, course materials displayed as HTML frames and Flash animations are not easily saved. Furthermore, access to the course is blocked a few days after the end of the semester. As will be explained below, the system-centered design of UCF Webcourses precludes optimizing recollection from the export file through representation in a situated context meaningful to the student. Instructors, on the other hand, are provided with instructions on how to create a course backup that captures the look and feel of the course interface they set up. Yet instructors can recontextualize the course content only when the backup is loaded back into Webcourses and they can access it. This backup file is in a compressed, proprietary format that is useless until loaded back into the same UCF Webcourses system from which it was generated. I can imagine a more open system in which instructors could load the backup into a personal Webcourses-equivalent system for planning future courses, sharing with colleagues, or transporting to another environment.
If students could also save the course skeleton, along with the data exported from discussions and email, then they could reanimate it, sustain it, and have it ready at hand indefinitely. This ideal fulfills the promise of lifelong learning where online coursework seems to fail in comparison to traditional, paper-based learning.
Given that ideal scenario, what is feasible? What long-term archiving options for course content do other popular learning management systems offer? What recommendations can be made to software designers and configuration administrators to facilitate creation of long-term memories from online coursework? Finally, what best practices may students be advised to adopt immediately to maximize the lifetime value of their educational investments? This problem has not been explicitly addressed as a research topic; however, related studies on the fragility of knowledge sharing in distributed communities may be applicable (Gächter, Von Krogh, and Haefliger; Jones). General studies of LMS provide hints about how to identify and overcome the fragility of knowledge created in online course work, especially those that focus on user-centered design and best practices (Selfe; Jafari, McGee, and Carmean; Blythe; Clark and Mayer). Electronic portfolios (ePortfolios or EPs) form a third area in which useful ideas for bolstering fragile knowledge from online coursework can be found (Whithaus; Estes; Indiana University; Barrett and Abrami; Bas and Eynon; Love, McKean, and Gathercoal). “The potential of EPs are nothing short of revolutionary as a dramatic expression of the possibilities of e-learning from cradle to grave as epitomized in the slogan 'E-portfolio for Life'” (Abrami and Barrett).
There is a long tradition, going back to Plato's Phaedrus, of criticizing written knowledge for its potential shallowness (Baron). Written symbols, like painted figures, are mute, fixed in visible space. Humans remember what they have read, and are able to go back to the original source, if they know what they are thinking about. If not, they are just fooling themselves, and their knowledge is shoddy, shallow, defective (Plato 561-567). Willingham formalizes the ancient Greek notion by distinguishing between rote, shallow, and deep knowledge, asserting that “understanding is remembering in disguise” (88). Shallow knowledge reflects an incomplete understanding of the material in question, but the onus is on the human to learn more and keep practicing the knowledge already gained, for which reading is the best alternative in a busy world. Background knowledge requires quick recall from long term memory to sustain thought in working memory. “As far as anyone knows, the only way to develop mental facility is to repeat the target process again and again” (115). The mental techniques of chunking and process automatization provide means of stretching the capabilities of the fixed, innate working memory each human possesses. Willingham does not venture into explaining the role the built environment may play in augmenting intelligence, especially the active, dynamically aware artificial intelligence of internetworked electronic computing machinery. Some of the awkwardness of Willingham's theory can be avoided by loosening the strict separation at the boundary of the human nervous system as the source of working and long-term memory. For over a decade, Andy Clark and David Chalmers have been arguing for 'the extended mind' against the strict division of mind and external environment.
Extended cognition acknowledges that cognition can be borne by devices and in the very structure of the environment, as well as in that which we limit to human brain activity, similar to the main idea of Donald Norman's The Design of Everyday Things. Active externalism opens the study of the distributed cognitive environment that includes the embodiment of external computing resources:
In effect, explanatory methods that might once have been thought appropriate only for the analysis of 'inner' processes are now being adapted for the study of the outer, and there is promise that our understanding of cognition will become richer for it. . . . Does the extended mind imply an extended self? It seems so. . . . To consistently resist this conclusion, we would have to shrink the self into a mere bundle of occurrent states, severely threatening its deep psychological continuity. Far better to take the broader view, and see agents themselves as spread into the world. (Clark 10-18)
Extending cognition into the built environment may also require rethinking the way working memory fetches information from long-term memory, to account for the work done by external cognitive mechanisms such as 'the Internet'. Where critics like Baron extend the question of the trustworthiness of writing to electronic media, my focus is on the 'shelf life' of content exposed in an online course (132). What connects this discussion to online coursework is the risk to easy recollection, and the threat of permanent loss, of memories supported by active externalism.
The term 'fragile knowledge' appears in research to describe some of the problems students have learning to program computers, identified as “knowledge that is partial, hard to access, and often misused . . . the person sort of knows, has some fragments, can make some moves, has a notion, without being able to marshal enough knowledge with sufficient precision to carry a problem through to a clean solution” (Perkins and Ray 213-214). Although similar to the critique of writing found in Phaedrus, in which it is argued that “written words are of [no] use except to remind him who knows the matter about which they are written,” fragility arises from the idiosyncrasies of the medium itself (565). Computer-based information systems boast many features unavailable to printed records, but they can also be more fragile than hard copies, particularly when system-centric design decisions foreclose the ability of users to manipulate the system to meet their personal needs. While the problem seems obvious, the fact that a premier institution like the University of Central Florida fails to safeguard its students' educational accomplishments beyond the close of each semester points to social and cultural, not just technological, roots of the problem.
Cynthia Selfe emphasizes the importance of situated knowledge approaches and user-centered design in Technology and Literacy in the Twenty-First Century, invoking Donna Haraway's 'coyote' way of knowing and Andrew Feenberg's insight into “considering such sites in terms of their underdetermined potential for political, economic, and social change—a potential which can be exploited by interested and knowledgeable social agents determined to make a difference” (154). Writing in the early days of online coursework, she implicitly recommends community-driven, user-centric, open standards, open source approaches to computer-based communications facilities. Feenberg describes the situation as one in which “the technical code is the most general rule of the game, biasing the play toward the dominant contestant. . . . Tactics thus differ from outright opposition in that they subvert the dominant codes from within by introducing various unexpected delays, combinations, and ironies into the application of strategies” (113). Casting this problem into the language of strategies and tactics in the asymmetrical power relations between technological innovators and subjugated users explained by Feenberg, I may propose options that seem subversive and may verge on being prohibited by university policy or the LMS End User License Agreement (EULA).
Nonetheless, they represent the boundary on which local, situated solutions of the sort recommended by Selfe form as communications between instructors and students. Other research into the fragility of knowledge sharing between human groups examines the forms of innovation, and the typical dynamics that arise under each.
In the private-investment model, innovators privately fund innovation and then use intellectual property protection mechanisms to appropriate returns from these investments. In the collective-action model, public subsidy funds public goods innovations, characterized by non-rivalry and non-exclusivity in using these innovations. Recently, these models have been compounded in the private-collective innovation model where innovators privately fund public goods innovations. Private-collective innovation is illustrated in the case of open source software development. (Gächter, von Krogh, and Haefliger 893)
Most learning management systems are produced under the private-investment model. While actions can be taken to rhetorically motivate policy change for both preserving and permitting access to online course work environments—via the sort of 'coyote knowing' Selfe recommends—practical software solutions can be designed and implemented using free, open source software to replicate undependable public higher education resources. That is, community managed software projects such as those hosted on Sourceforge.net, referred to above as 'private-collective innovation', have the potential to support a lifetime knowledge repository for learners as much as the public higher education institutions of the various states and countries in which formal learning initially occurs. But first, I want to further define the problem by examining research on existing learning management systems, in the hope of finding insight into their archival features.
Stephanie J. Coopman analyzed the Blackboard LMS, which dominates the industry in part through its acquisition of competitors WebCT and ANGEL, because “little research has examined how learning management systems structure participants' experiences and replicate or diverge from traditional pedagogy” (1). Her article does not address the archival features of any of these systems, but is relevant nonetheless when participants' experiences include future activities intent on recollection of knowledge gleaned from pedagogy. It focuses on the implications of LMS design decisions for communications among students, and between teachers and students. She insists that with performance metaphors, “knowledge becomes a process involving all learners (including instructors), rather than an object or thing produced by instructors for students.” In contrast, when a learning management system (LMS) emphasizes textual metaphors, the collaborative, dynamic potential of an online course may be diminished. This point can be extended to the topic of creating long term memories from online coursework because, with performance-oriented, collaborative metaphors, students may be encouraged to take more responsibility for ensuring that they are creating knowledge from their coursework, rather than just consuming the texts produced by the teacher. Moreover, Coopman's suggestion that “blogs might hold the greatest potential for breaking out of the traditional discussion board mode” because of their stronger integration of multimedia context, and potential reach outside the confines of the LMS environment, again points in the direction of migrating content from the LMS to other, more persistent and readily accessible virtual locations.
Coopman concludes her study with a critique of the hierarchical, system-centric design of Blackboard, in which control over the user interface, features, and policies of the LMS are in the hands of designers, engineers, marketers, and university administrators, leading to a “black-box effect to the infrastructure of Blackboard Inc.'s learning management systems.” She contrasts this student-as-user model to open source software systems like Moodle, which “allows tech-savvy faculty to actively participate in refining the course delivery platform,” thus allowing instructors and students more of a voice, if not an active role, in the evolution of their institution's LMS.
Moodle is a software package for producing Internet-based courses and web sites. It is a global development project designed to support a social constructionist framework of education. Moodle is provided freely as Open Source software (under the GNU Public License). . . . Moodle can be installed on any computer that can run PHP, and can support an SQL type database (for example MySQL). It can be run on Windows and Mac operating systems and many flavors of linux (for example Red Hat or Debian GNU). . . . The word Moodle was originally an acronym for Modular Object-Oriented Dynamic Learning Environment, which is mostly useful to programmers and education theorists. It's also a verb that describes the process of lazily meandering through something, doing things as it occurs to you to do them, an enjoyable tinkering that often leads to insight and creativity. (Moodle.org)
Despite Moodle's self-declaration as a lazily meandering process, its constitution as a global, private-collective, free, open source software project strengthens the otherwise subjugated student consumer user experience of private-investment, proprietary, commercial LMS providers. Awareness and change seem more likely to happen in less hierarchical, more user-centered systems.
User-centered design is the subject of Stuart Blythe's earlier work on the design of online courses, before the unified LMS swallowed the individual efforts of instructors deploying a mixture of individual technology systems like web pages, email, and chat. He criticizes technology designs modeled on academic discourse because “curriculum is designed according to formal specifications (procedures) and consideration of existing technologies (published materials), rather than apparent examination of users' experiences with them” (334). The goals of the segregated university course are bounded by the formal specification of the academic calendar, and terminate as soon as the letter grade is assigned. Under existing technologies – published materials, written text, paper – it is assumed that students collect course handouts, take notes, and obtain the assigned texts. No thought is really given to what students do with those materials after the course is over; however, it can be assumed that these printed materials will persist for the long term, provided the student does not throw them away or sell the books (throwing away notes is more serious, since obsolete versions of textbooks can be easily obtained). According to Blythe's reasoning for user-centered design, “we need to understand ways that our own students at our own campuses and in our own classes produce the knowledge necessary to succeed in a Web-based course. Otherwise, we risk designing unusable courses” (336). The scope of this understanding needs to extend beyond the concern of the system-centered view, to the scope of creating deep knowledge through solid, long-term memories of the course content. The system-centered LMS spits out an export file without keys to recreating the context that makes the content meaningful.
In “Managing Courses, Defining Learning,” authors Ali Jafari, Patricia McGee, and Colleen Carmean note that “transportability, or the ability to move content between systems, was seen by administrators as a system necessity at the institutional level and as a current weakness at the level of institutional data and the individual user” (52). Therefore, they recommend that any next-generation LMS should explicitly address issues of archives and storage: “Students want to be able to access and store content over the duration of their degree work, to have access to material for all their courses in one location . . . [and] to be able to return to a former course and locate materials and resources that were useful to them” (56). The design of most systems, both commercial and open-source, is based on frameworks developed in the mid-to-late 1990s that segregate data within servers located on campus, limiting access to current faculty and students, and typically for the limited duration of the academic semester a course is active. “Thus there is a need for a personal L/CMS, something that establishes access above and beyond current institutional systems” (64). Based on personal experience in the design architecture of Oncourse and ANGEL, the authors propose the 'Jafari model', whose five design requirements are “lifelong, outsourced, global, comprehensive, and smart” (66). The system relies on distributed web services that inter-operate with existing applications such as WebCT, ANGEL, Facebook, MERLOT, and others. It is learner-centric rather than course-centric, “with the learner's e-portfolio being the foundation and the connecting point to the system. . . . [L]earners no longer need to worry about the interruption of access to their learning accomplishments and collections, including e-portfolios, after leaving campus or about whether a campus will end maintenance of their learning and portfolio collections” (66).
Figure 2 depicts the proposed architecture, including its integration with existing educational and social networking technologies.
Figure 2: The Jafari Model integrates the LMS into a heterogeneous digital ecosystem where data sharing is implied (Jafari, McGee, and Carmean 68).
As appealing as the Jafari model for a future learning management system may be, the reality is that institutions have already invested a great deal of resources standardizing on an existing system that restricts access and does not allow customization beyond what its interface settings permit.1 Layering additional information systems alongside the learning management system may be a more feasible solution to improving the hardiness of knowledge based on online course content. Electronic portfolios are gaining popularity as a means for longitudinal, distributed evaluation of student work in addition to or as replacements for standardized, high-stakes testing (Whithaus; Love, McKean, and Gathercoal). As Carl Whithaus opines in Teaching and Evaluating Writing in the Age of Computers and High-Stakes Testing:
These new evaluation systems will acknowledge that students must learn how to become more effective communicators through interacting with others. Describing what and how the students learn through using multimodal and multimedia skills will replace focusing on deficits judged by outdated print-based standards. Finally, distributing a work to multiple readers means that various audiences will read student compositions and judge student skill levels and competencies for particular purposes. A single electronic portfolio will represent students' skills across disciplines because it will contain multiple genres. (150)
To bring this vision to fruition, argue Douglas Love, Gerry McKean, and Paul Gathercoal in “Portfolios to Webfolios and Beyond: Levels of Maturation,” institutions must progress through five stages of process maturity in their approach to storing and evaluating student work, from rudimentary, scrapbook-like, printed portfolios to an ideal level at which assessment, evaluation, and reporting is based on authentic evidence that involves students, teachers, and other evaluators, including potential employers. At the third level, dubbed 'Curriculum Collaboration Between Student and Faculty', “employers can view the student's showcase portfolio, including contextual clues from the institution, syllabi, assignments, help, resources, and assessment criteria” (30). At the fourth level, 'Mentoring Leading to Mastery', “additional heuristic value comes from the student's ability to generate her or his own portals for displaying work samples and achievements” (31). In parallel fashion, Whithaus differentiates between database-driven ePortfolio systems, which arrange and present a student's work following programmed patterns, and design-driven systems in which the thoughtful organization and presentation of the portfolio contents by the student forms an integral part of the demonstration of mastery (15).
The connecting thread is that such ePortfolio systems allow students to gather and store not only their compositions, but the real world context in which the work is situated, the aforementioned “contextual clues from the institution.” Additionally, “they may scaffold attempts at knowledge construction” (Abrami and Barrett). Making a connection to the maturation levels of ePortfolio systems suggests that an LMS may also exhibit different degrees of maturation with respect to not only how it presents information to the student within the course portal, but also how it presents information to the student after the course is finished, that 'cradle to grave' perspective. In this context, an LMS that has a function for exporting a threaded discussion as a text file provides less support for scaffolding knowledge construction than one that integrates the discussion into the overall course syllabus, related assignments, lecture notes, and readings, all of which are animated in an always available lifelong learning management system. The pressure is on LMS designers to provide either long-term access to course content, or streamlined tools for exporting course content to other systems, such as ePortfolios, and on university administrators to allow students to utilize such facilities.
Where do we go from here? All users—students taking online courses, and instructors teaching online courses—should take the immediate corrective action of using every opportunity to export dynamic course content, such as discussions and email, and copying context-defining content such as the syllabus, calendar, and announcements to a storage location that will not be affected by the administrative policies built into our institution's learning management system (UCF Webcourses). My long term recommendation is the automatization of services otherwise requiring 'manual' (that is, intentional) human intervention, for example manipulating a web browser to save content from online coursework out of the care of proprietary learning management systems so it can be reanimated. Until Webcourses permits students to save a backup of the course layout, as it does for instructors, in a format that can be easily imported into another LMS, a bit of subversive sharing is in order. Feenberg's tactics become the instructor sharing the Webcourses backup with students, who run custom software that translates it into a Moodle version of the course for lifelong retention (see Note 3). Of course, if this option is prohibited by university policy or the LMS EULA, then students will have to recombine manually exported content from Webcourses, rebuilding through Moodle's user interface the course structure and content not exportable via the student interface. As a long term strategy, students and instructors possessing the programming skills and time can develop free, open source software (FOSS) projects that together implement a lifelong learning management system that can readily be used to save content from all popular learning management systems in use by accredited universities. The term FOSS is typically associated with software licensed under the GNU General Public License (GPL) developed by Richard Stallman.
Known as copyleft, the license ensures that four freedoms are maintained: the freedom to run the program for any purpose, the freedom to modify the source code (which entails having the source code readily available), the freedom to redistribute copies of the program, and the freedom to distribute modified versions of the program along with the modified source code (Stallman, 18).
This 'Jafari model' inspired vision (Figure 2), from which technical requirements may be specified, includes what I have proposed above as “scaffolding knowledge construction” that integrates the exported discussion and email into the overall course syllabus, related assignments, lecture notes, and readings, all of which are animated in an always available lifelong learning management system. Recall the three criteria I specified at the beginning: the solution must be long term, comprehensive, and automatically recalled. A 'hacker-grade', pre-consumer model meets these overall requirements in the following system integration:
Moodle API integration offloads background LMS to a reliable third-party FOSS project.
Common LMS attributes (static syllabus, calendar, materials, and assignments; dynamic announcements, discussions, email, etc.) are handled by Moodle to reproduce the context of the original online course.
MySQL provides shared database for use by third-party services (other free, open source project applications) and custom programs.
Third-party and custom project code parses the backup file from Webcourses and imports it into Moodle to recreate the context of the course (syllabus, calendar, announcements, discussion group sequence, and additional content not archived by the student interface). Otherwise, the user reconstructs a representation of the original online course by manually creating the structure and copying content from Webcourses into the Moodle interface.
Third-party and custom project code parses exported discussion and email transcripts from Webcourses and imports into Moodle.
Third-party and project maintenance tasks refresh hyperlinks stored in Moodle data to ensure long term soundness.
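The parsing-and-import step above can be sketched in code. What follows is a minimal sketch in Python that assumes a hypothetical plain-text export layout (posts separated by dashed lines, each beginning with 'Author:' and 'Date:' header lines); the actual Webcourses export format would have to be inspected first, and the resulting records would then be written into Moodle's forum tables through the shared MySQL database.

```python
import re

def parse_discussion_export(text):
    """Split a plain-text discussion export into post records.

    Assumes posts are separated by lines of four or more dashes and
    begin with 'Author:' and 'Date:' header lines -- a hypothetical
    layout standing in for the real Webcourses export format.
    """
    posts = []
    for chunk in re.split(r"^-{4,}$", text, flags=re.MULTILINE):
        chunk = chunk.strip()
        if not chunk:
            continue
        headers, body = {}, []
        for line in chunk.splitlines():
            m = re.match(r"(Author|Date):\s*(.*)", line)
            if m:
                headers[m.group(1).lower()] = m.group(2)
            else:
                body.append(line)
        posts.append({
            "author": headers.get("author", "unknown"),
            "date": headers.get("date", ""),
            "message": "\n".join(body).strip(),
        })
    return posts

sample = """Author: Alice
Date: 2011-11-01
First post text.
----
Author: Bob
Date: 2011-11-02
A reply."""
records = parse_discussion_export(sample)
```

The point of normalizing each post into a dictionary is that the same records can feed either an automated database import or a manual reconstruction workflow.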
The project can be started immediately in the Sourceforge.net free, open source software development community, using the author's current UCF Webcourses for supplying exported content. The combined lifelong learning management system begins to sensibly store online course content in a durable form that can be revisited for decades. Preserving both the content and its context remediates fragile knowledge trapped in the original LMS. Students create scaffolding for long term memories by reconstructing representations of their online courses in their own LMS: they use formal course documents like the syllabus and calendar, communication mechanisms like assignments and email, and discussions to support remembering the context of their learning, and they leverage the existing export capabilities of popular learning management systems to facilitate the transfer of dynamic content generated in discussions, blogs, and email. Hyperlinks are reminders introduced in ancient information technology media, which, like their counterparts in images and writing, have structural fragility. A language may be forgotten or a book may be lost. DNS changes, the whims of remote archive maintenance, and other sources of 404 errors destroy memories of online learning. Maintaining the integrity of hyperlinks prevents externally, indirectly stored, reminder-based extended cognition from becoming fragile in the future, and consequently enfeebling its human host. Once the project has been established and users begin to employ it, other useful enhancements will arise through the feedback forums associated with the Sourceforge.net project. Even if the proposed solution ultimately fails to add value beyond the default exported data files from the original LMS, the exercise itself is a useful example of a user-centered design initiative that may spur change in the hierarchically controlled, proprietary systems that currently dominate the online education market.
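The hyperlink-maintenance task described above could begin as a simple link checker. This is a minimal sketch, assuming nothing about Moodle's schema: in practice the URLs would be read from the Moodle database, and the default fetcher issues an HTTP HEAD request; the `fetch` parameter is a hypothetical hook that lets tests or custom retrieval logic replace the network call.

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def check_links(urls, fetch=None):
    """Classify each URL as 'ok', 'broken', or 'unreachable'.

    By default issues an HTTP HEAD request and inspects the status
    code; pass a custom `fetch` (url -> status code, or None when
    the host cannot be reached) to avoid network access.
    """
    def default_fetch(url):
        try:
            with urlopen(Request(url, method="HEAD"), timeout=10) as resp:
                return resp.status
        except HTTPError as e:
            return e.code   # server answered, but with an error code
        except URLError:
            return None     # DNS failure, refused connection, etc.

    fetch = fetch or default_fetch
    results = {}
    for url in urls:
        code = fetch(url)
        if code is None:
            results[url] = "unreachable"
        elif 200 <= code < 400:
            results[url] = "ok"
        else:
            results[url] = "broken"
    return results

# Stubbed run (no network): one live link, one 404, one dead host.
report = check_links(
    ["http://a.example", "http://b.example", "http://c.example"],
    fetch={"http://a.example": 200,
           "http://b.example": 404,
           "http://c.example": None}.get,
)
```

A periodic maintenance job could run such a checker and flag 'broken' and 'unreachable' entries for archival lookup or removal, keeping the reminder-based extended cognition from silently decaying.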
Baddeley, A. Working Memory, Thought, and Action. London: Oxford University Press. 2007. Print.
Barrett, Helen and P. C. Abrami. "Directions for Research and Development on Electronic Portfolios." Canadian Journal of Learning and Technology 31.3 (Fall 2005). Web. 10 Oct. 2011.
Bass, R. and B. Eynon. "Electronic Portfolios: A Path to the Future of Learning." Chronicle of Higher Education (March 18, 2009). Web. 1 Nov. 2011.
Blythe, Stuart. "Designing Online Courses: User-Centered Practices." Computers and Composition 18 (2001): 329-346. Print.
Clark, Andy. “An Embodied Cognitive Science?” Trends in Cognitive Sciences 3.9 (September 1999): 345-51. Print.
Clark, Andy and David Chalmers. “The Extended Mind” Analysis 58.1 (January 1998): 7-19. Print.
Coopman, Stephanie J. “A Critical Examination of Blackboard's E-Learning Environment.” First Monday 14.6 (June 1, 2009). Web. 9 Nov. 2011.
Croy, Marvin and Ron Smelser. “Report to the Provost From the Learning Management System Evaluation Committee.” Charlotte, NC: University of North Carolina at Charlotte. May 15, 2009. Web. 9 Nov. 2011.
Estes, Ashley. "ePortfolios Help Students Track Progress." Virginia Tech Innovations (2010). Web. 1 Nov. 2011.
Feenberg, Andrew. Questioning Technology. New York: Routledge, 1999. Print.
Gächter, Simon, Georg Von Krogh, and Stephan Haefliger. “Initiating Private-Collective Innovation: The Fragility of Knowledge Sharing.” Research Policy 39.7 (2010): 893-906.
Indiana University. "Description of Forthcoming Version 2.0 of the Open Source Portfolio." Web. 10 Oct. 2011.
Jafari, Ali, Patricia McGee, and Colleen Carmean. "Managing Courses, Defining Learning: What Faculty, Students, and Administrators Want." EDUCAUSE Review 41.4 (2006): 50-70.
Jones, David J. “‘Vanished Like a Dream’: Traditional and Other Fragile Knowledge in the Global Village.” Second Australian Universities International Alumni Convention, August 2000. Web. 10 Oct. 2011.
Love, Douglas, Gerry McKean, and Paul Gathercoal. “Portfolios to Webfolios and Beyond: Levels of Maturation.” Educause Quarterly 27.2 (2004). Web. 10 Oct. 2011.
Moodle.org. “About Moodle.” 25 Oct. 2011. Web. 23 Nov. 2011.
Nickerson, R. S. and M. J. Adams. “Long-Term Memory for a Common Object.” Cognitive Psychology 11 (1979): 287-307. Print.
Perkins, David and Fay Martin. “Fragile Knowledge and Neglected Strategies in Novice Programmers.” Empirical Studies of Programmers: Papers Presented at the First Workshop on Empirical Studies of Programmers, June 5-6, 1986, Washington, D.C. Norwood, NJ: Ablex Publishing Corp., 1986. Print.
Perkins, David N., S. Schwartz, and R. Simons. “Instructional Strategies for Novice Programmers.” Teaching and learning computer programming: Multiple research perspectives. Ed. Richard E. Mayer. Hillsdale, N.J: L. Erlbaum Associates. 1988. Print.
Plato, Harold N. Fowler, W. R. M. Lamb, and Robert G. Bury. Plato: With an English Translation. London: W. Heinemann, 1917. Print.
Selfe, Cynthia L. Technology and Literacy in the Twenty-First Century: The Importance of Paying Attention. Carbondale, IL: Southern Illinois University Press, 1999. Print.
Stallman, Richard M. Free Software, Free Society: Selected Essays of Richard M. Stallman. Boston: GNU Press, 2002. Print.
University of Central Florida. “Policy Number 2-103.1 Use of Copyrighted Material.” 12 May 2010. Web. 28 Nov. 2011.
University of Central Florida. “Webcourses@UCF Lab Activity.” Web. 19 Nov. 2011.
Whithaus, Carl. Teaching and Evaluating in the Age of Computers and High-Stakes Testing. Mahwah, NJ: Lawrence Erlbaum Associates, Publishers. 2005. Print.
Willingham, Daniel T. Why Don't Students Like School?: A Cognitive Scientist Answers Questions About How the Mind Works and What It Means for the Classroom. San Francisco, CA: Jossey-Bass, 2009. Print.
1 On rare occasions an institution may change to a free, open source LMS whose exports students can import into their own instances of the LMS. Typically, however, the institution uses a proprietary system like UCF Webcourses that exports an indecipherable wad of binary data as a backup method. That backup would have to be reverse-engineered so it could be read into a Moodle system as if from a Moodle backup file, a key conversion for any software project attempting this. Such 'coyote-thinking' may violate the EULA or even United States law.
Monday, November 7, 2011
What is it? It is a global free, open source software development community.
Define FOSS: four freedoms, GPL, FDL, CC, shareware, freeware
Teachers & Students
Exploring Sourceforge: Find useful software: long tail, bazaar
Exercise: find projects related to other presentation topics [4-5 minutes]
Example: text to speech (examine espeak file speak_lib.h)
Teachers & Students
Course Design Practices
Example: audio virtual reality electronic literature (symposia symposia.cpp)
Digital Humanities Assignment Component
Development (smaller percentage of students have commit rights)
Service Learning opportunities (majority of students document (Yeats))
Unrestricted Public Record of Achievement: For portfolios, applications, resumes
Teachers & Students
Raymond, Eric S. The Cathedral and the Bazaar. (Rev. ed.). Sebastopol, Calif.: O'Reilly, 2001.
Stallman, Richard. Free Software, Free Society: Selected Essays of Richard M. Stallman. Boston: GNU Press. 2010.
Yeats, David. “The Role for Technical Communicators in Open-Source Software Development.” Technical Communication 55.1 (February 2008): 38-44.
If there is one takeaway, it is to read Yeats and realize that in most classes only a small percentage of students will be programmers. Also, pasting from LibreOffice is flawed.
Wednesday, June 29, 2011
Notes for Thomas J. Misa, Leonardo to the Internet: Technology and Culture from the Renaissance to the Present
Establishes history and demonstrates methodology more so than offers theory.
(x) The Renaissance court system was the conceptual key. . . . The technical projects they commissioned from the Florence cathedral to the mechanical robots for courtly entertainment, as well as the printed works on science, history, philosophy, religion, and technology, created and themselves constituted Renaissance culture.
(x-xi) There are good reasons to see the industrial revolution as a watershed in world history, but our time-worn inclination to seize on industrial technologies as the only ones that really matter has confounded a proper understanding of the great commercial expansion that followed the Renaissance. . . . I began not only to think of technologies as located historically and spatially in a particular society and shaped by that society's ideas of what was possible or desirable, but also to see how these technologies evolved to shape the society's social and cultural developments. To capture this two-way influence, I took up the notion of distinct “eras” of technology and culture as a way of organizing the material for this book.
Compare to Kittler whom Hayles criticizes for emphasizing military technologies. We are in the age where electronic technologies are now central to interpretation.
(xi) If technologies come from outside, the only critical agency open to us is slowing down their inevitable triumph – a rearguard action at best. By contrast, if technologies come from within society and are products of on-going social processes, we can, in principle alter them – at least modestly – even as they change us.
The participant culture exists in principle, although the default comportment of the consumer (spectator) is justified by Zizek.
(xii) Beyond Britain, commentators and technologists sometimes looked to copy British models of industry but more frequently adapted industrial technologies to their own economic and social contexts. The result was a variety of paths through the industrial revolution.
(xii) The legacy of the industrial revolution, it seemed, was not a single “industrial society” with a fixed relationship to technology but rather a multidimensional society with a variety of purposes for technology.
(xii) The first of these technology-intensive activities to fully flower was empire building, the effort by Europeans and North Americans to extend economic and political control over wide stretches of land abroad or at home.
He gives interesting accounts of British empire building in India but little detail about American internal activity.
(xiii) A second impulse in technology gathering force from the 1870s onward lay in the application of science to industry and the building of large systems of technology.
(xiv) The achievement of mass-produced steel, glass, and other “modern materials” around 1900 reshaped the aesthetic experience of working or walking in our cities and living in our homes.
(xiv) Technology has been and can be a potent agent in disciplining and dominating. I also discuss the modernists' troubling embrace of a fixed “method” of creativity.
(xiv) In the Cold War decades, scientists and engineers learned that the military services had the deepest pockets of all potential technology patrons.
(xv) The hardest history to write is that of our own time, and yet I believe that “globalization,” or “global culture,” is a force that oriented technology and society in the final three decades of the twentieth century.
(xvi) My corollary [to Moore's Law] states that the size of computer operating systems and software applications has doubled at the same pace as the operational speed of computer chips, soaking up the presumed power of the hardware and blunting its impact.
Do we have any better use for that power as consumers? Does it just mean we would have had internet based television sooner?
(xvii) It is not so much that our technologies are changing especially quickly but that our sense of what is “normal,” about technology and society, cannot keep pace.
(xvii) These eras appear to be shortening: the Renaissance spanned nearly two centuries, while the twentieth century alone saw the eras of science and systems, modernism, war, and global culture. It is worth mentioning a quickening also in the self-awareness of societies – our capacities to recognize and comprehend change are themselves changing. . . . This self-awareness of major historical change is clearly an instance of “reflexive” modernization in sociologist Ulrich Beck's sense. In this way, then, these eras do capture something real in our historical experience.
Is Beck on the same level as Lacan? McLuhan, Ong, and others recognized this quickening of awareness.
Technologies of the Court
(1) Whether from the Medici family or from his numerous other courtly patrons, Leonardo's career-building commissions were not as a painter, anatomist, or visionary inventor, as he is typically remembered today, but as a military engineer and architect.
Who are Leonardos of our recent era? Technology billionaires?
(3) Even the well-known history of movable-type printing needs to be reexamined in the light of pervasive court sponsorship of technical books and surprisingly wide court demand for religious publications.
We are already clever enough to examine Internet history in light of the triangle. Hayles develops a more nuanced and less deterministic narrative than Kittler, whom she criticizes for focusing on war as the determinant of technological development.
The Career of a Court Engineer
(4-5) In addition to his work as an architect and sculptor, Brunelleschi was a pioneer in geometrical perspective, especially useful in capturing the three dimensionality of machines in a two-dimensional drawing. From Leonardo's notebooks it is clear that he mastered this crucial representational technique. . . . The multiple-view drawings, done in vivid geometrical perspective, are a signature feature of his notebooks.
(5) His notebooks from Milan are filled with drawings of crossbows, cannons, attack chariots, mobile bridges, firearms, and horses.
(8-9) While certainly not such exciting subjects as muskets or cannon, the varied means for attacking or defending a fortification were at the core of Renaissance-era warfare.
(9) It is often suggested that Leonardo chafed at having to design theatrical costumes, yet scholars have recently found evidence indicating that Leonardo also built moving stage platforms and settings – and perhaps even an articulated mechanical robot for these festivities.
(10) His fascination with self-acting mechanisms is also evident in Leonardo's many sketches of textile machines found in the surroundings of Milan.
Link Leonardo's fascination with autonomous artificial automata to von Neumann. (Here a timestamp operator would reveal a later reading.)
(13) The special character of technological creativity in the Renaissance resulted from one central fact: the city-states and courts that employed Leonardo and his fellow engineers were scarcely interested in the technologies of industry or commerce. Their dreams and desires focused the era's technologists on warfare, city building, courtly entertainments, and dynastic displays. . . . The intellectual resources and social dynamics of this technological community drew on and helped create Renaissance court culture.
(13) Foremost among these intellectual resources was the distinctive three-dimensionality and depth of Renaissance art and engineering.
(14) Leading Florentine artists such as Massaccio were already practicing something like linear perspective a decade or more before Alberti's famous treatise On Painting (1436).
(14) Durer's most famous “object,” illustrating his 1525 treatise on geometry and perspective and reproduced widely ever since, was a naked woman on her back, suggesting that perspective was not merely about accurately representing the world but about giving the (male) artist power over it.
Throwing a bone to feminists and liberal studies?
(16) Leonardo even copied many of Alberti's distinctive phrases. It is Alberti's ideas we are reading when Leonardo writes that the perspective picture should look as though it were drawn on a glass through which the objects are seen.
(17) Close study of the two men's notebooks has revealed that Francesco was one source of designs for machines and devices that had previously been attributed to Leonardo alone.
(17-18) In a curious way, the presence of Leonardo's voluminous notebooks has helped obscure the breadth and depth of the Renaissance technical community, because researchers overzealously attributed all the designs in them to him. . . . Scholars believe that about one-third (6000 pages) of Leonardo's original corpus has been recovered; these papers constitute the most detailed documentation we have on Renaissance technology. . . . His notebooks record at least four distinct types of technical projects: his specific commissions from courtly patrons; his own technological “dreams,” or devices that were then impossible to build; his empirical and theoretical studies; and devices he had seen while traveling or had heard about from fellow engineers; as well as “quotations” from earlier authors, including Vitruvious.
(18) Perhaps the most distinctive aspect of Leonardo's career was his systematic experimentation, evident in his notebooks especially after 1500. . . . Some objects of Leonardo's systematic investigations were gears, statics, and fluid flow.
(19) The first several generations of printers as well as the best-known early technological authors were, to a surprising extent, dependent on and participants in late-Renaissance court culture.
(19-20) Movable type was also “first” developed in the Far East, centuries before Gutenberg. . . . The first truly movable type is credited to Pi Sheng (1041-48), who engraved individual characters in clay, fired them, and then assembled them on a frame for printing.
(20) Islam permitted handwriting the words of Allah on paper but for many years forbade its mechanical printing. The first Arabic-language book printed in Cairo, Egypt, did not appear until 1825.
(22) Gutenberg's principal inventions were the adjustable mold for casting type and a suitable metal alloy for the type.
(22) Printing traveled quickly.
(22-23) The printing press made a little-known German theology professor named Martin Luther into a best-selling author and helped usher in the Protestant Reformation. . . . Yet printers sensed a huge market for his work and quickly made bootleg copies in Latin, German, and other vernacular languages to fill it. It was said that Luther's theses were known across Germany in two weeks and across Europe in a month. . . . Eventually, Luther himself hailed printing as “God's highest and extremest act of grace, whereby the business of the Gospel is driven forward.”
Compare to Busa's praise of magnetic tape.
(23) The Protestant movement's emphasis on individuals' reading the Bible themselves required a massive printing effort. Whatever their personal beliefs, printers thus had material reasons to support Protestantism.
(23) Although it is tempting to see printers as proto-capitalists – owing to their strong market orientation and substantial capital needs – their livelihood owed much to the patronage and politics of the court system.
(25) Plantin's massive output suggests the huge scale of book production at the time. In the first fifty years of printing (1450s-1500) eight million books were produced in Europe. . . . This economy of scale sharply reduced the cost of books, which meant that one scholar could have at hand multiple copies from several scholarly traditions, inviting comparison and evaluation. Eisenstein writes, “Not only was confidence in old theories weakened, but an enriched reading matter also encouraged the development of new intellectual combinations and permutations.” In this way, the availability of vastly more and radically cheaper information led to fundamental changes in scholarship and learning.
Print humanities were born. Compare to relative scarcity and then proliferation of electronic computing machinery.
Technology and Tradition
(26) Transfer of technology before the Renaissance could be hit-or-miss. Machines invented in one time or place might well need to be rediscovered or even reinvented. Indeed, something very much like this occurred after the great technological advances of Song China (960-1279).
(26) Yet these pioneering Chinese technologies were not reliably recorded with the rigorous geometrical perspective that allowed Renaissance engineers to set down their ideas about the crucial workings of machines.
Importance of having technological tools to reflect upon technology.
(27) Eugene Ferguson, a leading engineer-historian, has brilliantly shown how quickly technical drawings might be corrupted, even in the West.
(28) In these terms a permanent and cumulative tradition in technology, enabled by the invention of printing and perspective, appeared first in central Europe's mining industry.
(29) Each of these three authors [Bringuccio, Agricola, Ercker] praised the values of complete-disclosure, precise description, and openness often associated with the “scientific revolution.” These books detailed the processes of mining, smelting, refining, founding, and assaying. Biringuccio and Agricola used extensive illustrations to convey the best technical practices of their time.
Value of open standards, technologies and licenses.
(31) The scientific revolution was also surprisingly dependent on printing technology and courtly patronage networks.
(32) The desires and dreams of Renaissance courts and city-states defined the character of the era's technology and much of the character of its culture.
Manovich two cultures. Consider microcomputer revolution as desires and dreams of late American capitalism.
Techniques of Commerce
(34) The age of commerce, anticipated in Spain and Portugal as well as in China and India, found its fullest expression during the seventeenth-century Golden Age of the Dutch Republic.
Technology and Trade
(37) The emergence of specialized ship designs in the Netherlands was another early signal that the Dutch understood how to bring technology and trade together in the pursuit of commerce.
(42) Impressed with how multiple-share ownership helped raise money and spread the risk of losses, the Dutch took the practice much further.
Creating Global Capitalism
(43) The Dutch – through their East India Company in the Pacific and West India Company in the Atlantic, coupled with the extensive trading in Europe and Africa – in effect created the first global economy.
(43) The commodity traders' guild began publishing weekly lists of prices in 1585. Within a few years, the Amsterdam commodity exchanges – for grain, salt, silks, sugar, and more – had surpassed their regional rivals and become a set of global exchanges.
(45) More to the point, tulip trading embodied several of the classic Dutch financial techniques, including futures contracts, commodity pricing, and multiple-share ownership.
(48) On the southeast coast of India and on the innumerable islands of what is now Indonesia, each of the trading countries sought to establish trading alliances; and when these alliances were betrayed, they tried unarmed trading “factories” (warehouse-like buildings where “factors” - traders – did business).
Interesting, unexpected etymology of factories.
(49) While the VOC [Verenigde Oostindische Compagnie] dealt with spices and cotton, the West India Company traded in slaves and sugar.
Little mention of the ethics of slave trade. See multimedia The Corporation. He is more interested in the difference between overall technological modes, ways of being, Tart's states, “major alterations in the way the mind functions” (1986, 4).
“The Great Traffic”
(51-52) Dutch preeminence came through the targeted processing and selective reexporting of the traded materials. . . . Indeed, high wages, relatively low volumes, and high-quality production typified the traffics, in sharp contrast with early industrial technologies, which emphasized low wages, high volumes, and low-quality production.
Compare Misa's differentiation between Dutch precision and British sloppy massive scale to McConnell's differentiation between systematic engineering and gold rush programming styles.
(55) Not only had Dutch traders captured commercial control over many key raw materials, including Spanish wool, Turkish mohair yarns, Swedish copper, and South American dyestuffs; the “traffic” system had also erected a superstructure of processing industries that added value to the flow of raw materials. The Dutch conditions of high wages and labor scarcity put a premium on mechanical innovation, the fruits of which were protected by patents. Another economic role taken on by the Dutch state (at the federal, state, and municipal levels) was the close regulation of industry in the form of setting standards for quality and for the packaging of goods.
(57) While choosing, developing, and using technologies with the aim of creating wealth had been an undercurrent before, this era saw the flourishing of an international (if nonindividual) capitalism as a central purpose for technology. It is really a set of wealth-creating technologies and techniques that distinguishes the Dutch commercial era.
Consider alongside his evaluation of Renaissance era technology. Does Misa apply Kuhn's methodology to technology?
Geographies of Industry
(59) Unprecedented growth in the cotton, iron, and coal industries during the decades surrounding 1800, culminating in the steam-powered factory system, powered a self-sustaining “take-off” in the British economy.
The First Industrial City: London
(65) Beer brewing affords a revealing window into industrial London while illustrating the links between industry and sanitation, consumption, and agriculture. . . . Reducing costs and increasing output – rather than enhancing quality, as in Dutch commerce – was the focus of technology in the industrial era.
(66) The competition between brewers to build ever-larger vats waned after 1814, however, when a 7,600-barrel vat at the Horse Shoe Brewery burst open and flooded the neighborhood, killing eight persons “by drowning, injury, poisoning by the porter fumes or drunkenness.”
An amusing fact.
(67) The porter brewers pioneered industrial scales of production and led the country in the capitalization of their enterprises.
(68) Brewers indirectly fixed a key term of measurement born in the industrial era, since Watt had the “strong drayhorses of London breweries” in mind when he defined “horsepower” at 33,000 foot-pounds per minute.
(69) These ancillary industries have not received the attention they deserve, for they are key to understanding how and why industrial changes became self-sustaining and cumulative.
Misa lays out opportunities for future scholarship, part of the value of this work.
(70) By the early nineteenth century perhaps half of all London pubs were tied to brewers through exclusive deliveries, financing, or leasing.
(73) By 1825 Maudslay and Bramah were among the London engineers hailed for their use of specialized machine tools to replace skilled handcraftsmanship.
Shock City: Manchester
(77) Early Arkwright machines were small, handcranked devices with just four spindles. The death blow to home spinning came when Arkwright restricted licenses for his water-frame patent to mills with 1,000 or more spindles. . . . Arkwright's mills – with their low wages and skills, their high-volume production of lower-grade goods, and their extensive mechanization – embodied core features of the industrial era.
In addition to ruthless protection of competitive advantage by restricting licenses: an early Microsoft?
(79) While the first generation of them had built textile machines and managed textile factories, the midcentury machine builders – the generation of London transplants – focused on designing, building, and selling machine tools.
(82) For Engels, Manchester was ground zero for the industrial revolution (he wrote specifically of “industriellen Umwälzung”).
(82) His real object was to shock his readers with visceral portraits of the city's horrible living conditions.
Horrible living conditions.
(83) Marx, with no firsthand industrial experience of his own, took Engels' description of Manchester as the paradigm of capitalist industry. Neither of them noticed a quite different mode of industry forming in Sheffield.
Region for Steel: Sheffield
(84) Sheffield was internationally known as a center for high-quality steel and high-priced steel products. . . . Not Manchester-style factories but networks of skilled workers typified Sheffield's industry.
Like the idealized network of small businesses? But then corrupted by scale. Nice to see remediated in Wired magazine stories.
(86) It is crucial to understand that the factory system so important in Manchester was absent in Sheffield.
(87) Some firms did nothing but coordinate such “hire-work” and market the finished goods, at home or overseas. These firms had the advantages of low capital, quick turnover, and the flexibility to “pick and choose to fit things in with whatever you were doing.”
(87-88) In the latter part of the nineteenth century these large steel mills and oversize forging shops symbolized a second generation of Sheffield's heavy industry.
(91) Steam not only directly killed many grinders, through dangerous working conditions, but also indirectly brought the deaths of many who crammed themselves and their families into the poorest central districts of industrial cities.
The indirect danger of steam technology. Would realization of this kill bourgeois interest in Steampunk?
(91) Sheffield's dire sanitary conditions resembled those of London or Manchester for much the same reason: the city's densely packed population lacked clean water.
(92) The geographies of industry surveyed in this chapter – multidimensional urban networks in London, factory systems in Manchester, and sector-specific regional networks in Sheffield – clinch the argument that there were many “paths” to the industrial revolution.
(93) Workers in steam-driven occupations, whether in London, Manchester, Sheffield, or the surrounding regions, were less likely to be in the country, to eat fresh food, to drink clean water, and (especially if female) to be skilled and have reasonable wages.
Instruments of Empire
(97) To a striking extent, inventors, engineers, traders, financiers, and government officials turned their attention from blast furnaces and textile factories at home to steamships, telegraphs, and railway lines for the colonies.
Steam and Opium
(101) Accurately mapping the Ganges in the latter eighteenth century had been a necessary first step in transforming the vague territorial boundaries assumed by the company into a well-defined colonial state. To this end one could say that the first imperial technology deployed on the Ganges was James Rennell's detailed Map of Hindoostan.
(102-103) The opium war began when China took determined steps to ban the importation of the destructive substance, and the British government, acting on the demand of Britain's sixty trading firms with business in China, insisted on maintaining free trade in opium and dispatched a fleet to China to make good its demands.
Telegraphs and Public Works
(104) In the industrializing countries of Western Europe and North America, telegraph systems grew up alongside railroads. Telegraph lines literally followed railway lines, since telegraph companies typically erected their poles in railroad right-of-ways.
(105) Telegraph lines were so important for imperial communication that in India they were built in advance of railway lines.
(107) Quick use of the telegraph saved not merely the British in Punjab but arguably the rest of British India as well. Most dramatic was that the telegraph made possible a massive troop movement targeted at the most serious sites of rebellion.
(108-109) By the time of the 1857 Mutiny, British rule in India had become dependent on telegraphs, steamships, roads, and irrigation works; soon to come was an expanded campaign of railway building prompted by the Mutiny itself. . . . The colonial government in India had no choice but to begin large-scale educational programs to train native technicians.
(113) (Fig 4.4 World Leaders in Railways, 1899.)
Interesting graph for 1899; it looks almost like a present-day USA Today infographic.
(127) Even today one can discern a shadow of the imperialist era in railroad maps of North America (look carefully at Canada, the western United States, and Mexico), in the prestige structure of technical education, and in the policy preferences of the orthodox development agencies in the United States and Europe.
(127) In this respect, we can see that imperialism was not merely a continuation of the eras of commerce and industry; rather, to a significant extent, imperialism competed with and in some circumstances displaced industry as the primary focus of technologists.
Science and Systems
(128) By transforming curiosities of the laboratory into consumer products, through product innovation and energetic marketing schemes, science-based industry helped create a mass consumer society. A related development was the rise of corporate industry and its new relationships with research universities and government bureaus.
(129) In these same decades technology took on its present-day meaning as a set of devices, a complex of industry, and an abstract society-changing force in itself.
Important for our definition of technology.
The Business of Science
(130) The chemical structures of these early dyes were unknown at the time. It was German chemists – based in universities and with close ties to industry – who deciphered their chemical structures and set the stage for a science-based industry.
(133) “Mass production methods which dominate modern economic life have also penetrated experimental science,” the chemist Emil Fischer stated in his Nobel Prize lecture in 1902. “Consequently the progress of science today is not so much determined by brilliant achievements of individual workers, but rather by the planned collaboration of many observers.” Duisberg put the same point more succinctly: “Nowhere any trace of a flash of genius.”
(134) In World War I, popularly known as the chemist's war, chemists were directly involved in poison gas manufacture.
(135) The entanglement of the German chemical industry with the Third Reich also has much to do with the system-stabilizing innovation and the corporate and political forms needed for its perpetuation. . . . With all these heavy investments, Farben's executives felt they had little choice but to conform with Hitler's mad agenda after he seized power in 1933. Not Nazis themselves – one-fourth of the top-level supervisory board were Jews, until the Aryanization laws of 1938 – they nevertheless became complicit in the murderous regime.
Flashes of Genius
(136) The singular career of Thomas Edison aptly illustrates the subtle but profound difference separating system-originating inventions from system-stabilizing ones.
(139) Edison wanted his electric lighting system to be cost competitive with gas lighting and knew that the direct-current system he envisioned was viable only in a densely populated urban center. Using Ohm's and Joule's laws of electricity allowed Upton and Edison to achieve these techno-economic goals.
(140) When Edison tested his system in January 1881 he used a 16-candlepower bulb at 104 volts, with resistance of 114 ohms and current of 0.9 amps. The U.S. standard of 110 volts thus has its roots in Edison's precedent-setting early systems.
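In the spirit of working code, Edison's January 1881 test figures can be checked directly against the two laws Misa names; a minimal sketch (the arithmetic is my own back-of-envelope check, not Misa's):

```python
# Check Edison's 1881 test figures against Ohm's and Joule's laws.
# Ohm's law: V = I * R;  Joule's law (power): P = V * I.
volts, ohms, amps = 104, 114, 0.9

predicted_volts = amps * ohms   # 0.9 * 114 = 102.6, close to the measured 104
watts = volts * amps            # 104 * 0.9 = 93.6 W for the 16-candlepower bulb

print(f"Ohm's law predicts {predicted_volts:.1f} V (Edison measured {volts} V)")
print(f"Power drawn by the bulb: {watts:.1f} W")
```

The near-agreement (102.6 V predicted versus 104 V measured) shows how Upton and Edison could use these laws as practical design tools for the techno-economic goals Misa describes.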
Battle of the Systems
(143) Edison was wary of the energy losses of transformers, the high capital costs of building large AC stations, and the difficulties of finding insulators that could safely handle 1,000 volts.
(143) Arc lighting for streets, AC incandescent systems for smaller towns, AC motors for factories, and the pell-mell world of street railways were among the lucrative fields that Edison's diagnosis overlooked.
(144) In the mid-1880s Thomson turned his inventive efforts on incandescent lighting and AC systems. His other notable inventions include electric welding, street railway components, improved transformers, watt meters, and induction motors. These inventions were among the necessary technical components of the universal system of the 1890s.
Tenders of Technological Systems
(148) Edison fought it, Thomson denied it, and Insull embraced it: a new pattern of technological change focused on stabilizing large-scale systems rather than inventing wholly new ones.
(148) Industrial scientists and science-based engineers stabilized the large systems by striving to fit into them and, most importantly, by solving technical problems deemed crucial to their orderly expansion. Neither of these professions existed in anything like their modern form as recently as 1870.
(150) Industrial research became a source of competitive advantage for the largest firms, including General Electric, AT&T, and General Motors. . . . Independent inventors, formerly the nation's leading source of new technology, either were squeezed out of promising market areas targeted by the large science-based firms or went to work for them solving problems of the companies' choosing.
(151) The industrial orientation of electrical engineering at MIT from around 1900 into the 1930s contrasts markedly with its more scientific and military orientation during and after the Second World War.
(155) Hazen's work on the “network analyzer” began with his 1924 bachelor's thesis under Vannevar Bush. Bush, a pioneer in analog computing, was working for [Dugald] Jackson's consulting firm studying the Pennsylvania-based Superpower scheme. . . . By 1929 the measuring problems were solved and GE's Doherty approved the building of a full-scale network analyzer.
(155) Built jointly by GE and MIT and physically located in the third-floor research laboratory in MIT's Building 10, the network analyzer was capable of simulating systems of great complexity.
(156-157) Synthetic dyes, poison gases, DC light bulbs, AC systems, and analog computers such as Hazen's network analyzer constituted distinctive artifacts of the science-and-systems era. . . . The most important pattern was the underlying sociotechnical innovations of research laboratories, patent litigation, and the capital-intensive corporations of science-based industry.
(157) A neat contrast can be made of the British cotton-textile industry that typified the first industrial revolution and the German synthetic dye industry and American electrical industry that together typified the second.
(157) The presence of the financiers, corporations, chemists, and engineers produced a new mode of technical innovation and not coincidentally a new direction in social and cultural innovation. The system-stabilizing mode of technical innovation - “nowhere any trace of a flash of genius” - was actively sought by financiers. . . . The system-stabilizing innovations, with the heavyweights of industry and finance behind them, also created new mass-consumer markets for electricity, telephones, automobiles, household appliances, home furnishings, radios, and much else.
Materials of Modernism
(158) Modernism in art and architecture during the first half of the twentieth century can be best understood as a wide-ranging aesthetic movement, floated on the deeper currents of social and economic modernization driven by the science-and-systems technologies.
Materials for Modernism
(160) The materials that modernists deemed expressive of the new era – steel, glass, and concrete – were not new.
(163) Glass through most of the nineteenth century was in several ways similar to steel before Bessemer. It was an enormously useful material whose manufacture required much fuel and many hours of skilled labor and whose application was limited by its high cost.
Manifestos of Modernity
(168) Critical to the development of the modern architectural style were the interactions among three groups: the Futurists in Italy, who gave modernism an enthusiastic technology-centered worldview; the members of de Stijl in the Netherlands, who articulated an aesthetic for modern materials; and the synthesis of theory and practice in the Bauhaus in Germany.
(171) Marinetti's provocative avant-garde stance, frank celebration of violence, and crypto-revolutionary polemics landed the Futurists squarely in the middle of postwar fascism.
(173) The task of the artist was to derive a style – or universal collective manner of expression – that took into account the artistic consequences of modern science and technology.
(176) The durable contribution of de Stijl, then, was not merely to assert, as the Futurists had done, that modern materials had artistic consequences, but to identify specific consequences and embed these in an overarching aesthetic theory.
Ironies of Modernism
(184-185) The Stuttgart exposition of 1927 was the first salvo in a wide-ranging campaign to frame a certain interpretation of modernism. It was to be rational, technological, and progressive; historical references and ornamentation were strictly forbidden. In 1932, the Museum of Modern Art in New York gave top billing to its “International Style” show, which displayed and canonized the preponderantly European works representing this strain of modernist architecture. . . . The influential teaching of Bauhaus exiles Gropius, Moholy-Nagy, and Mies van der Rohe in Boston and Chicago raised a generation of U.S.-trained architects and designers who imbibed the modern movement directly from its masters. In the 1950s, in architecture at least, the International Style, or Modern Movement, became a well-entrenched orthodoxy.
(186) The German government agency charged with rationalizing workshops and factories also worked closely with several women's groups to rationalize the household.
(189) In examining how “technology changes culture” we see that social actors, often asserting a technological fundamentalism that resonates deeply in the culture, actively work to create aesthetic theories, exemplary artifacts, pertinent educational ventures, and broader social and political movements that embed their views in the wider society.
Misa focuses on what Manovich calls cultural conventions, saying little even in the final chapters of technological aesthetics that Manovich attributes to the conventions of software.
The Means of Destruction
(190) No force in the twentieth century had a greater influence in defining and shaping technology than the military. . . . Lamenting the decline of classic profit-maximizing capitalism, industrial engineer Seymour Melman termed the new economic arrangement as contract-maximizing “Pentagon capitalism.” During these years of two world wars and the Cold War, the technology priorities of the United States, the Soviet Union, and France, and to a lesser extent England, China, and Germany, were in varied ways oriented to the “means of destruction.”
(191) Such promising technologies as solar power, analog computers, and machinist-controlled computer machine tools languished when (for various reasons) the military backed rival technical options – nuclear power, digital computers, and computer-controlled devices of many types – that consequently became the dominant designs in their fields.
An interesting position on technological determinism.
A War of Innovation
(192) It may seem odd to distinguish between the two world wars, linked as they were by politics and economics, but in technology the First World War was not so much a war of innovation as one of mass production.
(193) Not merely a military tactic, blitzkrieg was more fundamentally a “strategic synthesis” that played to the strength of Germany's superior mobility technologies, especially aircraft and tanks, while avoiding the economic strain and social turmoil of a sustained mobilization.
(195) Germany had neither the enriched uranium, the atomic physicists, nor the governmental resources to manufacture an atomic bomb.
“Turning the Whole Country into a Factory”
(195-196) If the First World War is known as the chemists' war owing to military use of synthetic explosives and poison gases, it was the Manhattan Project that denominated the Second World War as the physicists' war. . . . In reality, Los Alamos served as the R&D center and assembly site for the bombs. The far greater part of the project was elsewhere, at two mammoth, top-secret factory complexes in Tennessee and Washington State.
(196) After several governmental committees considered its prospects, the project came to rest in the Office of Scientific Research and Development, or OSRD, a new government agency headed by MIT engineer Vannevar Bush.
Bush, who gets so much attention in digital media studies.
(197) Although the point is not frequently emphasized, it was entirely fitting that Roosevelt assigned the construction phase of the bomb project to the Army Corps of Engineers and that the Army assigned command over the Manhattan Engineering District to Brigadier General Leslie Groves, who had been the officer in charge of building the Pentagon complex.
(198) The crucial task at Oak Ridge was to produce enough enriched uranium, somewhere between 2 and 100 kilograms, no one knew precisely how much, to make a bomb.
(204) Many commentators, even Eisenhower and Churchill, miss the crucial point that the two bombs dropped on Japan were technologically quite distinct: the Hiroshima bomb used Oak Ridge's uranium while the Nagasaki bomb used Hanford's plutonium.
(206-207) One hesitates to put it this way, but the two bombs dropped on Japan appear to have been “aimed” also at the U.S. Congress. After all, there were two hugely expensive factories that needed justification. . . . Bohr's observation that the atomic project would transform “the whole country into a factory,” true enough in the obvious physical and organizational sense, may also be insightful in a moral sense as well.
(208) Nautilus, it turned out, was a precedent for more than just the U.S. Navy, which in time fully matched the other military branches with its nuclear-powered submarines capable of launching nuclear missiles.
(210) The enduring legacy of the Manhattan Project above and beyond its contribution to the atomic power effort was its creation of a nuclear weapons complex that framed years of bitter competition between the United States and the Soviet Union.
(210) The cost from 1940 to 1986 of the U.S. nuclear arsenal is estimated at $5.5 trillion. No one knows the fair dollar cost of the former Soviet Union's nuclear arsenal, but its currently crumbling state – nuclear technicians have in effect been told to find work elsewhere, while security over uranium and plutonium stocks is appallingly lax – constitutes arguably the foremost danger facing the planet today.
Command and Control: Solid-State Electronics
(211) Yet, together, the massive wartime efforts on radar, proximity fuzes, and solid-fuel rockets rivaled the atom bomb in cost. . . . Even as its radar aided the Allied war effort, the Rad Lab [Radiation Laboratory at MIT] sowed the seeds for three classic elements of the Cold War military-industrial-university complex: digital electronic computing, high-performance solid-state electronics, and mission-oriented contract research.
(211-212) Vacuum tubes were sensitive only to lower frequency signals, so when the radar project's leaders decided to concentrate on the microwave frequency (3,000 to 30,000 megahertz), they needed an electronic detector that could work in these very high frequencies. . . . Much of the solid-state physics done during the war, then, focused on understanding these semiconductor materials and devising ways to purify them.
(213) In the transistor story, as in that of the Shippingport nuclear reactor, we see how the tension between military and commercial imperatives shaped the emergence of a technology that today is fundamental to our society.
(214) Indeed, instead of classifying transistors, the armed services assertively publicized military uses for them. . . . Each [Bell System] licensee brought home a two-volume textbook incorporating material from the first symposium. The two volumes, composing Transistor Technology, became known as the bible of the industry. They were originally classified by the government as “restricted” but were declassified in 1953. . . . A third volume in the textbook series Transistor Technology resulted from a Bell symposium held January 1956 to publicize its newly invented diffused base transistor. . . . For several years Bell sold these high-performance diffused transistors only to the military services.
(215) The Army Signal Corps also steered the transistor field through its “engineering development” program, which carried prototypes to the point where they could be manufactured.
(215) Bell Laboratories had not forgotten its telephone system, but its commercial applications of transistors were squeezed out by several large high-priority military projects.
(216) The integrated circuit was also to a large degree a military creation.
(216-217) Across the 1950s and 1960s, then, the military not only accelerated development in solid-state electronics but also gave structure to the industry, in part by encouraging a wide dissemination of (certain types of) transistor technology and also by helping set industrywide standards. . . . These competing demands probably delayed the large-scale application of transistors to the telephone system at least a half-dozen years (from 1955 to the early 1960s).
Command and Control: Digital Computing
(217) Code-breaking, artillery range-finding, nuclear weapons designing, aircraft and missile controlling, and antimissile warning were among the leading military projects that shaped digital computing in its formative years, from the 1940s through the 1960s.
Impact of military agenda on digital computing.
(219) Forrester wanted Whirlwind to become another megaproject like the Radiation Laboratory or Manhattan Project.
(221) At the center of this fantastic scheme was Forrester's Whirlwind, or more precisely fifty-six of his machines. . . . With participation in SAGE, IBM gained a healthy stream of revenues totaling $500 million across the project's duration. Fully half of IBM's domestic electronic data-processing revenues in the 1950s came from just two military projects: SAGE and the “Bomb-Nav” analog computer for the B-52 bomber.
(221) As important as this revenue stream was the unparalleled exposure to state-of-the-art computing concepts and the unconstrained military budgets that permitted the realization of those concepts.
(222) Even though the commercial success of IBM's System 360 made computing a much more mainstream activity, the military retained its pronounced presence in computer science throughout the 1960s and beyond. . . . The IPTO [Pentagon's Advanced Research Project Agency Information Processing Techniques Office] was far and away the nation's largest funder of advanced computer science from its founding in 1962 through the early 1980s. . . . Among the fundamental advances in and applications of computer science funded by the IPTO were time-sharing, interactive computer graphics, and artificial intelligence. J.C.R. Licklider, head of the IPTO program in the early 1960s, also initiated work on computer networking that led, after many twists and turns, to the Internet.
Bush, Licklider, Engelbart.
(223) A 1964 RAND Corporation report, “On Distributed Communications,” proposed the theoretical grounds for a rugged, bombproof network using “message blocks” - later known as “packet switching” - to build a distributed communications system. . . . These concepts became the conceptual core of the Internet.
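The core idea behind Baran's “message blocks” can be sketched in a few lines: cut a message into independently routable packets, each carrying enough header information to be reassembled in any arrival order. A toy illustration only, not the RAND design:

```python
# Toy illustration of packet switching: a message is cut into numbered
# blocks that can travel independently and be reassembled out of order.
import random

def packetize(message: str, size: int = 8) -> list[tuple[int, str]]:
    """Split a message into (sequence_number, payload) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets: list[tuple[int, str]]) -> str:
    """Rebuild the message regardless of packet arrival order."""
    return "".join(payload for _, payload in sorted(packets))

packets = packetize("On Distributed Communications")
random.shuffle(packets)     # packets may arrive over different routes
print(reassemble(packets))  # -> "On Distributed Communications"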
(223) Through the military-dominated era there was an unsettling tension between the West's individual-centered ideology and its state-centered technologies.
(224) Together, these military endeavors were not so much an “outside influence” on technology as an all-pervading environment that defined what the technical problems were, how they were to be addressed, and who would pay the bills. While closed-world, command-and-control technologies typified the military era, the post-Cold War era of globalization has generated more open-ended, consumer-oriented, and networked technologies.
Toward Global Culture
(227) Whatever the economic and political consequences of globalization, the threat of cultural homogenization concerns many observers.
(227) While mindful of the possibilities of convergence, I believe there is greater evidence for a contrary hypothesis.
(229) The “divergence hypothesis” is also consistent with what we have learned from earlier eras.
The Third Global Economy
(229) Our present-day global economy is not the first or second global economy we have examined in this book, but the third. The first was in the era of commerce.
(229) A second global economy developed in the 1860s and lasted until around the First World War, overlapping with the era of imperialism.
(231) Since around 1970 there has been a resurgence of global forces in the economy and in society, but who can say how long it will last.
Fax Machines and Global Governance
(232) One might say that in the United States the military market displaced the consumer market, while in postwar Japan it was the other way around. The structure of the global economy can in part be traced to the different paths taken by each nation's electronics industry.
(234) The CCITT, or Comite Consultatif International Telegraphique et Telephonique, was the leading international standards-setting body for all of telecommunications beginning in the 1950s. Its special strength was an remains standards setting by committee.
(235) It was CCITT's success with the 1980 standards that made facsimile into a global technology – and relocated the industry to Japan. . . . The achievement of worldwide standards, digital compression, and flexible handshaking, in combination with open access to public telephone systems, created a huge potential market for facsimile.
(236) This network of students and teachers, along with some journalists and government officials, is notable not only for creatively using fax technology but also for explicitly theorizing about their culture-making use of technology.
(236) The idea of using fax machines for building European identity and youth culture originated with the Education and Media Liaison Center of France's Ministry of Education, which was in the middle of a four-year project to boost public awareness of telematics and videotext. (France's famous Minitel system came out of this same context of state support for information technology.)
McWorld or McCurry?
(238) “McWorld” epitomizes the cultural homogenization and rampant Americanization denounced by many critics of globalization. “McDonaldization” refers to a broader process of the spread of predictability, calculability, and control – with the fast-food restaurant as the present-day paradigm of Max Weber's famous theory of rationalization.
(240) The presence of McDonald's in the conflict-torn Middle East is good news to Tom Friedman, the author of The Lexus and the Olive Tree (1999). In his spirited brief on behalf of globalization, Friedman frames the “golden arches theory of conflict prevention.”
(245) McDonald's corporate strategy of localization not only accommodates local initiatives and sensibilities but also, as the company is well aware, blunts the arguments of its critics.
(249) Overall, we can discern three phases in the Internet story: the early origins, from the 1960s to mid-1980s, when the military services were prominent; a transitional decade beginning in the 1980s, when the National Science Foundation became the principal government agency supporting the Internet; and the commercialization of the Internet in the 1990s, when the network itself was privatized and the World Wide Web came into being.
(250) The internet conception resulted from an intense collaboration between Vinton Cerf, a Stanford computer scientist who had helped devise the ARPANET protocols, and Robert Kahn, a program manager at ARPA. In 1973 they hit upon the key concepts – common host protocols within a network, special gateways between networks, and a common address space across the whole – and the following year published a now-classic paper, “A Protocol for Packet Network Intercommunication.” Although this paper is sometimes held up as embodying a singular Edisonian “eureka moment,” Cerf and Kahn worked very closely for years with an international networking group to test and refine their ideas.
(254) A good example of how the Internet gained its seemingly effortless “global” character is the so-called domain-name system, or DNS. . . . With the spread of the domain-name system, any single user can be addressed with on simple address. More important, the DNS established an address space that is massively expandable and yet can be effectively managed without any single center.
(255) The Web is, at least conceptually, nothing more than a sophisticated way of sending and receiving data files (text, image, sound, or video).
(257) From the start, Berners-Lee built in to the Web a set of global and universal values. These values were incorporated into the design at a very deep level.
(257) The second goal, dependent on achieving the first goal of human communication through shared knowledge, is that of machine-understandable information.
(258) These examples – worldwide financial flows, fax machines, McDonald's, and the Internet – taken together indicate that globalization is both a fact of contemporary life and a historical construction that emerged over time.
(259) Indeed, the certainty during the 1990s that globalization would continue and expand, seemingly without borders, ended with the attacks on 11 September 2001. Whatever one makes of the resulting “war on terrorism,” it seems inescapable that the nation-state is, contrary to the globalizers' utopian dreams, alive and thriving as never before. . . . A national security-oriented technological era may be in the offing. It would be strange indeed if the September 11th attackers – acting in the name of antimodern ideologies – because of the Western nations' national security-minded and state-centered reactions, brought an end to this phase of global modernity.
Misa suggests a post-globalization era resulting from the war on terror.
The Question of Technology
Science and Economics
(261) However, the centrality of science to technology is often overstated. Scientific theories had little to do with technological innovation during the eras of industry, commerce, and courts.
(263) Much of the frank resentment today aimed at the World Bank, International Monetary Fund, and World Trade Organization stems from their conceptual blindness to the negative aspects of technology in social and cultural change.
Variety and Culture
(265) A more subtle and yet more pervasive example of technology's interactions with the goals and aims of society resides in the process of technical change.
(267) Power does flow from the end of a gun; Europeans' deadly machine guns in the colonial wars proved that point. But there is an important dimension of power that resides in things, in the built world, and in the knowledge about that world that people have access to or are excluded from.
(267) The conceptual muddle surrounding these questions of technology transfer can be cleared up with Arnold Pacey's useful notion of “technology dialogue,” an interactive process which he finds is frequently present when technologies successfully cross cultural or social barriers.
Pacey. How about Feenberg?
Displacement and Change
(268) Displacement occurs when a set of technology decisions has the effect of displacing alternatives or precluding open discussion about alternatives in social development, cultural forms, or political arrangements.
(269) For roughly fifty years, a certain technical perspective on modern architecture displaced alternative, more eclectic approaches.
(269) Displacement, then, is how societies, through their decisions about technologies, orient themselves toward the future and, in a general way, direct themselves down certain social and cultural paths rather than other paths.
(270) Can technologies be used by nondominant actors to advance their alternative agendas?
(271) A second reason for looking closely at the technology-power nexus is the possibility that non-dominant groups in society will effectively mobilize technology.
(272) The new diagnosis coming from ecological modernization is that dealing effectively with the environmental crisis will require serious engagement with technology.
Disjunctions and Divisions
(273) Nevertheless, it is a mistake to follow the commonplace conviction that technology by itself “causes” change, because technology is not only a force for but also a product of social and cultural change.
Misa's main point, countering a naïve perspective of technological determinism. Also need to broaden understanding of how modern technology interacts with other cultures.
(274) This internal disjunction is compounded by the external division between the Moslem-Arab worldview and the Western worldview, made evident by the September 11th attacks.
(275) It is an especially pressing concern that scholars and citizens in the West know all too little about the details and dynamics of how modern technologies are interacting with traditional social forms. This is true not only for the Middle East, Asia, and Africa but also for native peoples in North and South America.
Misa, Thomas J. (2004). Leonardo to the internet: Technology & culture from the Renaissance to the present. Baltimore: Johns Hopkins University Press.
Misa, Thomas J. Leonardo to the Internet: Technology & Culture from the Renaissance to the Present. Baltimore: Johns Hopkins University Press, 2004. Print.