
The Open Computer Project is a special area of the Open Source Everything Project that is focused on the task of developing a complete Open Source computing platform including its hardware –something which has often been the subject of speculation in the Open Source software community but only tentatively explored. The chief stumbling block to this notion has generally been the difficulty of fabricating new microprocessors free from proprietary architecture. Microprocessors have defied the common trend of diversification among the rest of the digital devices in the industrial ecology of the computer and instead have consolidated into a small number of proprietary chip families targeted at certain market niches, which remain dependent upon a very small community of very large corporations for their development. Though in the past it was not unusual for this community of companies to aid the limited fabrication of experimental microprocessors in support of academic computer science research, there is no incentive for companies vying for monopoly to support a true Open Source microprocessor architecture. This, in turn, has created a major obstacle to real progress in the personal computer industry which, though still enjoying a rapid pace of incremental evolution, has become hopelessly stymied in the evolution of its basic architecture, leading to a suppression of real innovation and radical progress.

The potential solution to this problem is an open and flexible computing architecture which reduces all the components of a computer, including the core processing systems, to what could be called ‘commodity’ hardware. Virtually all computer hardware is exceptionally generic, interchangeable, and manufactured by innumerable companies whose competition drives their innovation. They are ‘commodities’ in the sense that they have many producers but function more-or-less identically and can be used interchangeably, thus their price tends to be driven purely by supply and demand. But the key components that determine the core architecture of the computer and the machine language of software used –the microprocessors and motherboard integration chip families– have defied this evolution. They’ve evolved to become very specialized and proprietary in architecture and –by design or accident– have come to assume far too much influence over the course of evolution of the computer itself. Now, there’s no grand conspiracy going on here –though clearly this situation reflects some degree of collusion across the industry. It’s a plain fact that companies like Microsoft and Intel collaborate on the design of Intel chip families. But the real cause of this situation relates to the essential concepts of ‘Turing Machine’ theory and its central processing paradigm.

A Central Processing Unit imposes a specific architecture on a computer by virtue of the machine language it compels all software to use and the data word and instruction sequence protocols it requires, as embodied by external bus architectures, to communicate with the rest of a computing system. Thus the internal architecture of the CPU ‘spills out’ over the rest of the system. Once a computer designer adopts a specific microprocessor, he’s locked into conforming the rest of his design to its standards. Early on in the history of personal computers, microprocessors had fairly generic functions even if they had highly proprietary internal designs and exclusive machine languages. Computer designers had much more freedom to diversify the overall architecture of the computer –though support from software developers has always been hard-won. There were far more –and more different– computers early in the personal computer’s history than there are today. But as the personal computer grew in sophistication, collusion between microprocessor makers, motherboard makers, and operating system developers resulted in a progressive specialization of the microprocessor and its peripheral interface chips around very specific hardware platforms. Chip makers wanted to make it easier for computer designers to use their chips, but in the process made it harder for designers to deviate from the chip maker’s preconceptions about the form a computer should take. An industrial ecology favors competition between products but abhors competing ‘standards’ of interconnection, interoperability, and communication between them, because this forces either costly redundancy in development and production to accommodate these varying standards or a sacrifice in market share. So the computer industry was compelled to lean in favor of whatever hardware platforms looked like they could command the greatest market share. This could have been countered by the development of a standardized machine language and microprocessor external interface but, arguing that this would impede progress and innovation (when the exact opposite was true…), computer industry leaders resisted attempts to create such standards. The notion of Open Source is a latecomer and even if it had existed early in the history of personal computing it’s unlikely the primitive executives of the era would have comprehended its virtues. After all, it took them decades to ‘get it’ after Linux first emerged. And, truth be told, the industrial ecology of the computer industry was quite accidental. To this day most people in the computer industry don’t have a very good grasp of how their industry works and how this affects the evolution of the technology. They can’t see the forest for the trees.

If the winner in the competition between microprocessors and their de-facto hardware platforms were the product of evolutionary optimization of performance and capability, as is the case with most components, then there would not be an architectural bottleneck in the computer industry today. But it’s not honest competition. It’s not apples-to-apples. It’s apples-to-oranges. The best performer isn’t the winner here. It’s whoever has the most ‘muscle’ in terms of coercion, collusion, and nepotism. So in the end the consumer isn’t necessarily getting the best product. A lot of options have been systematically locked out and he never knows what he’s missed. And the root of this problem is simply a microprocessor that isn’t sufficiently generic. What is needed, then, are new and more generic computing architectures that trade sophisticated hardware for sophisticated design, using less sophisticated hardware to produce roughly the same performance.

Now, one might argue that, even if a few big corporations dominate microprocessor production, computers are still quite cheap and work well –more or less. Why do we need other computer platforms? Just the rift between Mac and PC is a nuisance in itself. The point here, though, is freedom. Freedom of design evolution, diversity of production methods, and personal choice. Personal computers may be very cheap today but we’ve reached a bottleneck in their evolution because of the hegemony that current CPU and platform architectures impose. For those who think the personal computer is ‘good enough’, one could argue that they just don’t have any idea of what they’re missing –just as the consumers of automobiles have no idea what they’re missing because their imagination is largely limited to what the market shows them. The personal computer is today theoretically capable of a great host of user interface metaphors, applications, and physical systems designs that even the most experienced ‘power user’ scarcely has the imagination to visualize and may never get to see from the industry as it is. After all, for 99% of all computer users everything they know about what a computer is and does is limited by what the market shows them and what marketing people tell them.

The notion that corporations only give people what they want is a myth, because the corporation has no good mechanisms to divine what the customer really wants and the customer doesn’t really know what the full range of possibilities is, and so can only choose from what’s offered. Their imagination is largely dictated by what already exists. So the market makes up their mind for them by offering them choices derived in some way from what has previously been made, interspersed with the occasional experiment in what professional designers and executives imagine they ‘might’ want and will give a company a competitive edge. If such a notion of what the public might want is not consistent with the corporations’ own interests –usually defined by what production tools and facilities they still owe money on and how much money they are willing to speculate with– it never gets made and the public never knows what they missed. (One can draw parallels here to how politics also generally works…) Again, there’s no grand conspiracy here. This is simply one of the inherent limitations of Industrial Age paradigms. But this isn’t trivial. This is how we ended up entering the 21st century driving cars with worse gas mileage than cars of the 1970s when, if efficiency, logic, performance, and convenience actually won out in this equation, we’d all now be riding around in zero-emission vehicles with 150mph cruising speed that were safer than an airliner and could drive themselves. The computer is destined to go exactly the same way as the automobile if we can’t break this design evolution bottleneck. The key to that may be to turn the whole thing Open Source.

There are two concepts which have strong prospects for a much more generic computing platform suited to the ideals of the Open Computer Project. These are the Virtual Computer and the Distributed Computer. These concepts will be described in more detail in the Emerging Technologies section, but here we’ll offer a simple overview.

The Virtual Computer is an alternative to monolithic microprocessors with fixed machine languages that is based on the technology of a simple digital device that is already a ‘commodity’ product and which has much lower production overhead than the typical microprocessor. That device is the Field Programmable Gate Array. FPGAs are matrices of reconfigurable logic gates whose functions are assigned by data held in a companion static RAM. By loading sets of data into this RAM one can create virtual logic circuits for a specific task, and then change these circuits later as needed. FPGAs were developed as a means for communications devices operating in remote locations to be easily reconfigured to support changes in communications protocols. They have since become a powerful and ubiquitous tool for digital systems engineering, allowing custom circuits to be simulated and tested before being committed to chip production. Eventually some engineers began experimenting with these devices as coprocessors for computers, realizing that if they could reduce certain tasks to specific circuits that processed a large block of data in a single clock cycle, rather than as strings of alternating instructions and data handled one or a few ‘words’ per cycle, they could radically outperform even some of the most powerful supercomputers in existence –at least for these simple tasks. This led to the notion of the Virtual Computer: a computer that uses collections of dynamically loaded virtual circuits to perform its operations instead of conventional serial software.
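To make the underlying mechanism concrete, here is a minimal Python sketch of the principle at work in an FPGA cell: a lookup table whose logic function is defined entirely by data loaded into its configuration RAM. The class and values below are purely illustrative assumptions for this article; real devices are configured through hardware description languages and vendor toolchains.

# A minimal sketch (not any real FPGA toolchain): each "cell" is a 4-input
# lookup table whose truth table is held in configuration RAM. Loading new
# bits into that RAM changes what logic the cell computes -- the essence of
# a field programmable gate array.

class LUT4:
    """A 4-input lookup table configured by 16 bits of 'configuration RAM'."""
    def __init__(self, config_bits):
        assert len(config_bits) == 16
        self.config = list(config_bits)   # the SRAM contents define the circuit

    def evaluate(self, a, b, c, d):
        # The four inputs form an address into the configuration RAM.
        address = (a << 3) | (b << 2) | (c << 1) | d
        return self.config[address]

# Configure one cell as a 4-input AND gate: only address 0b1111 yields 1.
and_gate = LUT4([0] * 15 + [1])
print(and_gate.evaluate(1, 1, 1, 1))   # 1
print(and_gate.evaluate(1, 0, 1, 1))   # 0

# 'Reprogramming' is just writing new data into the same RAM --
# the identical hardware now behaves as a 4-input OR gate instead.
or_gate = LUT4([0] + [1] * 15)
print(or_gate.evaluate(0, 0, 0, 1))    # 1

A real FPGA is simply a very large matrix of such cells plus programmable routing between them, so entire data-processing pipelines can be laid out as circuits rather than executed as instruction streams.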

A Virtual Computer is a computer that uses a large set of programmable gate arrays instead of a CPU. Instead of programs designed as strings of instructions, its low-level programs define logic circuits which, once set, process data by themselves and in parallel, limited only by how much space the collective set of gate arrays provides, and complete each operation in a single clock cycle. VCs can thus process data at extremely high speeds with what would, for typical microprocessors, be considered primitive clock rates, because they do so much in one cycle. VCs theoretically sidestep the scaling limits usually discussed in terms of Amdahl’s Law –the observation that you cannot double the performance of a computer simply by doubling the number of processors, because the serial portions of a program and the overhead of instruction communication do not shrink accordingly. VCs use instructions only once, in the setup of program circuits. After that there is no more instruction communication until a program needs to change its functions, and so the performance bottleneck becomes the amount of circuit area and the amount of RAM those circuits can independently interface to, divided between conventional RAM banks and RAM built into the program circuits themselves (which itself consumes gate array space). Thus the larger the gate arrays and the more active circuits they host, the greater the performance, in direct proportion and potentially without limit. This is an extremely powerful but simple technology that offers the potential of reducing the core hardware architecture of computers to easily evolvable software and very generic hardware based on cheap commodity components. Indeed, most of the ICs making up a contemporary computer could potentially be replaced by a big array of these identical RAM-like gate array chips, reducing a whole computer to just a small variety of digital components.
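The scaling argument can be illustrated with some rough, purely hypothetical arithmetic (these are illustrative numbers, not benchmark figures): a serial processor pays an instruction-handling cost for every data word, while a configured circuit array pays its ‘instruction’ cost once, at setup, and thereafter produces one result per circuit per clock cycle.

# A back-of-the-envelope sketch of the setup-once-then-stream argument.
# All figures below are invented for illustration only.

def serial_cycles(n_items, instructions_per_item):
    # A conventional CPU interleaves instruction fetch with data handling,
    # so total cycles grow with instructions * items.
    return n_items * instructions_per_item

def virtual_computer_cycles(n_items, n_parallel_circuits, setup_cycles):
    # Circuit configuration is paid once; afterwards every configured
    # circuit yields one result per clock cycle, in parallel.
    return setup_cycles + (n_items + n_parallel_circuits - 1) // n_parallel_circuits

items = 1_000_000
print(serial_cycles(items, instructions_per_item=10))          # 10,000,000 cycles
print(virtual_computer_cycles(items, n_parallel_circuits=256,
                              setup_cycles=50_000))            # 53,907 cycles

Even with a generous one-time setup cost, the configured array finishes in a small fraction of the cycles, which is why modest clock rates can still yield very high throughput.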

Interest in VCs emerged in the late 1980s but little progress has been made since, partly because of the radically different nature of the software engineering required but also because this was, even in a very nascent form, immediately threatening to the microprocessor hegemony. Thus the developers of FPGAs, closely tied to the established microprocessor industry, were disinclined to make chips that could solve the one flaw of that device for this application: the inability of gate array circuits to directly access the RAM that defines their circuits, and thus to bootstrap and dynamically self-reconfigure their systems. Without this feature, nascent VCs are dependent upon another conventional computer to load their circuit software, relegating them to the role of co-processors and hampering their potential to streamline hardware design. Since these FPGAs –and any future Homogeneous Processor Array device that might evolve from them– are relatively low-tech compared to the typical microprocessor, there is strong potential here for economical small-scale development and production, which would make them a strong contender for a potential Open Source computing hardware platform. And there is great room for improvement. FPGAs are still not engineered to anywhere near the performance and cell density of simple RAM chips, despite having comparable engineering and fabrication complexity. In a competitive development environment, we could easily see the realization of low cost HPA chips with conventional clock speeds and vast cell capacities, resulting in performance so far ahead of the cutting edge of contemporary microprocessors that it would simply be no contest. VC researchers have been predicting this for a decade. They just never accounted for the power of the hegemony they would have to fight their way through.

The other concept –the Distributed Computer– seeks to address the design hegemony in personal computing at a higher level by effectively changing what we consider to be the physical composition of the personal computer as a finished product, replacing a monolithic hardware paradigm with a more open, modular, and free-form concept. The physical design of the personal computer hasn’t really changed very much since it was introduced in the 1970s. Sure, case styles are more fanciful than ever, video displays have gone flat, and the basic hardware of a computer can now be fit in a package small enough to fit in a pocket but, ultimately, there has been no real improvement in the ergonomic performance of the computer from this. The only actual innovations in personal computer ergonomics since its introduction are the mouse and –still scarce– touch screens and wearable displays. And, again, much of this relates to the way proprietary microprocessors and hardware platforms have increasingly limited one’s options in design.

Contemporary personal computer design tends toward a motherboard-centric, all-in-one-box, Swiss Army knife paradigm that has dominated personal computer design since its introduction. Early on, this strategy seemed quite logical since the point of the personal computer was miniaturization, and so it made sense to collect the various subsystems of earlier computer systems into one convenient package that a computer-illiterate public could treat as a ‘black box’ appliance whose internals could remain hidden and as irrelevant to the average computer user as a car’s engine is to the average driver. Some personal computer executives, such as Steve Jobs, were especially wedded to this notion of the computer as an all-inclusive digital toaster, and Apple has produced many designs reflecting this in the extreme. For a time this company, which has long prided itself on its friendly progressive image, was in constant war with its own customers over access to their computers’ internals! But in practice the concept has never been particularly practical. Computer companies rarely track the history of their machines after they sell them, but if they did, more of them might realize that the computer does not represent so much an appliance as an environment, one that users are compelled over time to customize as they customize their homes or apartments. The computer is not a discrete device. It is, in fact, an extension of our habitat. And today’s serious computer users are as likely to build them from parts for themselves as they are to buy one ready-made, customizing them from the start. Yet, somehow, computer manufacturers still think it sensible to maintain this hegemony on the primary form factor of the computer –its core unit as embodied by the desktop box or, at least, the all-inclusive motherboard. Perhaps they simply lack the imagination to envision any other option, or realize that this remains a last bastion of market control. Thus the most radical departures from conventional computer design we ever see today are wearable computers and novelty cases shaped like anime characters or steampunk contraptions.

This hegemony on design isn’t entirely intentional. It’s largely the legacy of the way the personal computer industry itself evolved and the way our culture perceives information technology. But it’s never been a particularly good fit from an ergonomic standpoint. There’s a fundamental –sometimes plain, sometimes subtle– difference between the way we use information and media in general and the way we are compelled to use them through the computer which hampers convergence and prevents it from fully integrating into our daily lives and casual activity. It’s always somehow a ‘special’ artifact needing to be handled in special ways, manipulated in a special posture, given special spaces, and requiring special knowledge to properly use. Thus it has never quite been as convenient and flexible a device as it could be. And it also doesn’t well fit the new paradigms the Internet has introduced to our culture’s notions of information use. We live in an increasingly networked world and yet our personal computers are still fundamentally designed as if they came from a pre-network era.

The concept of the Distributed Computer addresses this situation with the simple suggestion that a computer should be treated not as a single device but as an environment, one that is allowed to self-organize around its constituent subsystems. Inside the box of the typical computer is actually a collection of fairly independent subsystems which have only been placed in one package because, in the past, that was considered more convenient. But in a networked world it’s not. Today that notion represents architectural stratification and thus a path to forced premature obsolescence –as tends to occur every time some company introduces a new version of a microprocessor, a new or radically denser form of mass storage, or a new and more bloated version of an operating system. By allowing these subsystems independence from the motherboard, it becomes possible to distribute their development as end-user products among competitors in the same way development is distributed across the rest of the industrial ecology of computers, allowing them to evolve with a greater degree of independence from each other and allowing the personal computer as a whole to be ‘brand-independent’ and freely adaptable to many more innovations, design possibilities, and end-user situations without the whole system being forced into obsolescence at random by some company’s whim.

The Distributed Computer would achieve this by dividing the personal computer into a collection of largely self-contained and specialized modular network appliances which use a local area network as a system bus –one type for each of the major subsystems in a computer, but with potentially any number of any one type usable simultaneously. Each of these devices would contain enough firmware-based intelligence that it could do its job largely without the direct control of any centralized operating system –much like the specialized ‘servers’ on a network– and would self-integrate into a local computer’s ‘system domain’ as soon as it is switched on, without much intervention from the user. Simply bringing one of these modules into the range of the system domain or plugging it into the home/office LAN would be sufficient to add it to the rest of the system. They could be based on either wired or wireless networking, with more performance-critical links likely favoring wired connections but able to use either where necessary. The basic set of module types would be: processors, mass storage devices, network switch/gateway units, assorted specialized peripherals (like scanners, printers, media converters, and special purpose co-processors), and Personal Access Devices, or PADs. Any number of these devices could be added to the system domain to increase performance, capacity, or capability, whether within the same home or office or distributed long distance across WANs and VPNs. And it wouldn’t matter much where they were located relative to one another. Instead of the usual desktop objets d’art of today’s space-wasting PCs, many of these components might be relegated to cabinets, closets, and basements.
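As a rough sketch of how such self-integration might look in software, the following Python fragment models modules announcing themselves to a ‘system domain’ registry; all of the names and module types here are hypothetical illustrations, not an existing protocol or product.

# A minimal sketch of the self-integration idea: each module announces its
# type and capabilities to a 'system domain' registry as soon as it comes
# online, and any other module can then discover and use it. In a real
# distributed computer this registry would be populated by a LAN discovery
# mechanism (e.g. multicast announcements) rather than direct method calls.

from dataclasses import dataclass, field

@dataclass
class Module:
    kind: str                 # e.g. "processor", "storage", "pad"
    name: str
    capabilities: dict = field(default_factory=dict)

class SystemDomain:
    """Registry of every module currently participating in the domain."""
    def __init__(self):
        self.modules = []

    def announce(self, module):
        # Triggered when a module is switched on or joins the LAN.
        self.modules.append(module)

    def find(self, kind):
        return [m for m in self.modules if m.kind == kind]

domain = SystemDomain()
domain.announce(Module("storage", "closet-raid", {"capacity_tb": 8}))
domain.announce(Module("processor", "basement-hpa", {"cells": 2**20}))
domain.announce(Module("pad", "kitchen-tablet", {"display": "10in touch"}))

# Any PAD can now ask the domain what resources exist, with no central
# motherboard tying them together.
print([m.name for m in domain.find("processor")])   # ['basement-hpa']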

PADs would be the most critical components of the Distributed Computer because they are responsible for the user interface. But unlike the traditional personal computer, they would be allowed to freely diversify in form to accommodate the varying ergonomics of different uses, activities, and settings instead of trying to be all-encompassing. Some might look like an extremely simple laptop or a flat panel monitor with a companion keyboard. Others might take the form of hand-held or table-integrated tablets. Others might assume the role of a television using large-area wall-mounted displays. Some might be based on a wearable display and chord keyboard or voice input. Some might be nothing more than a speaker and microphone array or a wearable telephone headset, relying on voice and audio only or integrating both audio and video interfacing. Some might take the form of streamlined cell phones, PDAs, or iPod-like devices that can be carried in a pocket. And others may take the form of intelligent jewelry, watches, toys, personal robots, home appliances, and tools. Some PADs would also be designed for more portability and independence, having somewhat greater internal intelligence and local storage capacity so that they can revert to the role of a portable computer and perform some functions in complete isolation from the rest of the system domain in the event of network connection loss. This would be particularly common for devices with form factors akin to common portable computers and PDAs, which would have the benefits of both portable off-line use and seamless integration when one returns home, though it would come with added cost. Here we see a diversity of potential user interfaces that is simply impossible for contemporary personal computers without a lot of complicated contrivance –a clear example of how today’s computer users really don’t know what they’re missing.
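The offline-fallback behaviour described for the more independent PADs could work along the lines of the minimal Python sketch below, in which a device prefers the system domain’s storage but quietly degrades to its own local cache when the network is unreachable and re-synchronizes later; the class and method names are illustrative assumptions only.

# A minimal sketch of graceful degradation for a portable PAD: use the
# system domain when reachable, keep working locally when it is not,
# and re-integrate seamlessly on return.

class RemoteStore:
    """Stands in for storage provided by the system domain."""
    def __init__(self, online=True):
        self.online = online
        self.data = {}
    def save(self, key, value):
        if not self.online:
            raise ConnectionError("system domain unreachable")
        self.data[key] = value

class PAD:
    def __init__(self, remote):
        self.remote = remote
        self.local_cache = {}     # limited on-board storage

    def save_note(self, key, value):
        try:
            self.remote.save(key, value)      # normal, networked path
        except ConnectionError:
            self.local_cache[key] = value     # keep working offline

    def resync(self):
        # When the PAD rejoins the domain, push anything written offline.
        for key, value in list(self.local_cache.items()):
            self.remote.save(key, value)
            del self.local_cache[key]

domain_store = RemoteStore(online=False)
pad = PAD(domain_store)
pad.save_note("todo", "buy fpga dev board")   # lands in the local cache
domain_store.online = True
pad.resync()                                  # seamless re-integration
print(domain_store.data)                      # {'todo': 'buy fpga dev board'}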

The Distributed Computer, by definition, favors something like Open Source development for both hardware and software since, in order to support open and independent development of its modular components, it must establish open communications and system software standards for them all. And in combination with the Virtual Computer as an alternative to conventional microprocessors, it could produce a quantum leap in personal computing power and capability. But it’s also clear this concept could be very threatening to existing computer manufacturers because it basically allows their OEM suppliers to render them obsolete outright and, with little change to their normal development and production practices, market products directly to the customer rather than feeding them to a higher tier on the industrial food chain. Current peripheral makers are fully capable of stepping into such product development without much difficulty and most OEM suppliers already sell to a retail market as well –albeit one composed of more technically savvy consumers. And if the OS is Open Source too, well, what would be left for the likes of Apple and Microsoft –the very last of the old guard of personal computer developers– to brand and control? Resistance –or at least calculated indifference– is likely. But once a system with this kind of radically open architecture is demonstrated and its economic advantages to the overall industry made obvious, nature should take its course.

The Open Computer Project would also have important ramifications for all later technical work in TMP, providing a uniform, flexible, license-free computing platform for use in all its activities. Distributed Computer architectures will be very helpful in the context of Sensor Nets, Control Webs, and Distributed Awareness systems, as will be discussed later. And development of Virtual Computer and Distributed Computer systems would also have significant impact on space settlement development. Space settlements will, because of small community size and limited facility scale, need to cope with more limited industrial diversity than is possible on Earth –even with the many benefits offered by Post-Industrial fabrication methods by the time of concerted space settlement. The notion of Min-A-Max –maximum diversity of applications from a minimum diversity of components– will long be the predominant paradigm of in-space industrial design. And so the computers a space settlement must rely on will favor architectures where largely homogeneous and generic hardware, producible by small systems, can support a very large and sophisticated diversity of applications. Competitive proprietary computing platforms simply will not fly in that environment.
