Wednesday, December 1, 2010

Windows 95

After Windows 3.11, Microsoft began to develop a new consumer-oriented version of the operating system, code-named Chicago. Chicago was designed to have support for 32-bit preemptive multitasking like OS/2 and Windows NT, although a 16-bit kernel would remain for the sake of backward compatibility. The Win32 API first introduced with Windows NT was adopted as the standard 32-bit programming interface, with Win16 compatibility preserved through a technique known as "thunking". A new GUI was not originally planned as part of the release, although elements of the Cairo user interface were borrowed and added as other aspects of the release (notably Plug and Play) slipped.
Microsoft did not change all of the Windows code to 32-bit; parts of it remained 16-bit (albeit not directly using real mode) for reasons of compatibility, performance, and development time. Additionally, it was necessary to carry over design decisions from earlier versions of Windows for reasons of backward compatibility, even if these design decisions no longer matched a more modern computing environment. These factors eventually began to impact the operating system's efficiency and stability.
Microsoft marketing adopted Windows 95 as the product name for Chicago when it was released on August 24, 1995. Microsoft had a double gain from its release: first, it made it impossible for consumers to run Windows 95 on a cheaper, non-Microsoft DOS; secondly, although traces of DOS were never completely removed from the system and a version of DOS would be loaded briefly as part of the boot process, Windows 95 applications ran solely in 386 Enhanced Mode, with a flat 32-bit address space and virtual memory. These features made it possible for Win32 applications to address up to 2 gigabytes of virtual RAM (with another 2 GB reserved for the operating system), and in theory prevented them from inadvertently corrupting the memory space of other Win32 applications. In this respect the functionality of Windows 95 moved closer to Windows NT, although Windows 95/98/ME did not support more than 512 megabytes of physical RAM without obscure system tweaks.
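As a rough back-of-the-envelope illustration of where that 2 GB figure comes from (a sketch of the pointer arithmetic only, not a description of Microsoft's memory manager), the split follows directly from the width of a 32-bit address:

```python
# Illustrative arithmetic only: how a flat 32-bit address space divides into
# the 2 GB application / 2 GB system halves mentioned above.
TOTAL_ADDRESS_SPACE = 2 ** 32                      # 4 GiB of virtual addresses
USER_SPACE = 2 ** 31                               # lower half, usable by a Win32 application
SYSTEM_SPACE = TOTAL_ADDRESS_SPACE - USER_SPACE    # upper half, reserved for the operating system

print(f"total:  {TOTAL_ADDRESS_SPACE / 2**30:.0f} GiB")
print(f"user:   {USER_SPACE / 2**30:.0f} GiB")
print(f"system: {SYSTEM_SPACE / 2**30:.0f} GiB")
```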
IBM continued to market OS/2, producing later versions in OS/2 3.0 and 4.0 (also called Warp). Responding to complaints about OS/2 2.0's high demands on computer hardware, version 3.0 was significantly optimized both for speed and size. Before Windows 95 was released, OS/2 Warp 3.0 was even shipped preinstalled with several large German hardware vendor chains. However, with the release of Windows 95, OS/2 began to lose market share.
It is probably impossible to choose one specific reason why OS/2 failed to gain much market share. While OS/2 continued to run Windows 3.1 applications, it lacked support for anything but the Win32s subset of Win32 API (see above). Unlike with Windows 3.1, IBM did not have access to the source code for Windows 95 and was unwilling to commit the time and resources to emulate the moving target of the Win32 API. IBM later introduced OS/2 into the United States v. Microsoft case, blaming unfair marketing tactics on Microsoft's part.
Microsoft went on to release five different versions of Windows 95.

Computer virus

A computer virus is a computer program that can copy itself[1] and infect a computer. The term "virus" is also commonly but erroneously used to refer to other types of malware, including but not limited to adware and spyware programs that do not have the reproductive ability. A true virus can spread from one computer to another (in some form of executable code) when its host is taken to the target computer; for instance because a user sent it over a network or the Internet, or carried it on a removable medium such as a floppy disk, CD, DVD, or USB drive.[2]
Viruses can increase their chances of spreading to other computers by infecting files on a network file system or a file system that is accessed by another computer.[3][4]
As stated above, the term "computer virus" is sometimes used as a catch-all phrase to include all types of malware, even those that do not have the reproductive ability. Malware includes computer viruses, computer worms, Trojan horses, most rootkits, spyware, dishonest adware and other malicious and unwanted software, including true viruses. Viruses are sometimes confused with worms and Trojan horses, which are technically different. A worm can exploit security vulnerabilities to spread itself automatically to other computers through networks, while a Trojan horse is a program that appears harmless but hides malicious functions. Worms and Trojan horses, like viruses, may harm a computer system's data or performance. Some viruses and other malware have symptoms noticeable to the computer user, but many are surreptitious or simply do nothing to call attention to themselves. Some viruses do nothing beyond reproducing themselves.

The first academic work on the theory of computer viruses (although the term "computer virus" had not yet been coined) was done in 1949 by John von Neumann, who gave lectures at the University of Illinois on the "Theory and Organization of Complicated Automata". Von Neumann's work was later published as the "Theory of self-reproducing automata".[5] In his essay von Neumann postulated that a computer program could reproduce.
In 1972 Veith Risak published his article "Selbstreproduzierende Automaten mit minimaler Informationsübertragung" (Self-reproducing automata with minimal information exchange).[6] The article describes a fully functional virus written in assembler language for a SIEMENS 4004/35 computer system.
In 1980 Jürgen Kraus wrote his diplom thesis "Selbstreproduktion bei Programmen" (Self-reproduction of programs) at the University of Dortmund.[7] In his work Kraus postulated that computer programs can behave in a way similar to biological viruses.
In 1984 Fred Cohen from the University of Southern California wrote his paper "Computer Viruses - Theory and Experiments".[8] It was the first paper to explicitly call a self-reproducing program a "virus", a term introduced by his mentor Leonard Adleman.
An article that describes "useful virus functionalities" was published by J.B. Gunn under the title "Use of virus functions to provide a virtual APL interpreter under user control" in 1984.[9]

Science Fiction

The Terminal Man, a science fiction novel by Michael Crichton (1972), told (as a sideline story) of a computer with telephone modem dialing capability, which had been programmed to randomly dial phone numbers until it hit a modem answered by another computer. It then attempted to program the answering computer with its own program, so that the second computer would also begin dialing random numbers in search of yet another computer to program. The program was assumed to spread exponentially through susceptible computers.
The actual term 'virus' was first used in David Gerrold's 1972 novel, When HARLIE Was One. In that novel, a sentient computer named HARLIE writes viral software to retrieve damaging personal information from other computers to blackmail the man who wants to turn him off.

Virus programs

The Creeper virus was first detected on ARPANET, the forerunner of the Internet, in the early 1970s.[10] Creeper was an experimental self-replicating program written by Bob Thomas at BBN Technologies in 1971.[11] Creeper used the ARPANET to infect DEC PDP-10 computers running the TENEX operating system.[12] Creeper gained access via the ARPANET and copied itself to the remote system where the message, "I'm the creeper, catch me if you can!" was displayed. The Reaper program was created to delete Creeper.[13]
A program called "Elk Cloner" was the first computer virus to appear "in the wild" — that is, outside the single computer or lab where it was created.[14] Written in 1981 by Richard Skrenta, it attached itself to the Apple DOS 3.3 operating system and spread via floppy disk.[14][15] This virus, created as a practical joke when Skrenta was still in high school, was placed in a game on a floppy disk. On its 50th use the Elk Cloner virus would be activated, infecting the computer and displaying a short poem beginning "Elk Cloner: The program with a personality."
The first PC virus in the wild was a boot sector virus dubbed (c)Brain,[16] created in 1986 by the Farooq Alvi Brothers in Lahore, Pakistan, reportedly to deter piracy of the software they had written.[17]
Before computer networks became widespread, most viruses spread on removable media, particularly floppy disks. In the early days of the personal computer, many users regularly exchanged information and programs on floppies. Some viruses spread by infecting programs stored on these disks, while others installed themselves into the disk boot sector, ensuring that they would be run when the user booted the computer from the disk, usually inadvertently. PCs of the era would attempt to boot first from a floppy if one had been left in the drive. Until floppy disks fell out of use, this was the most successful infection strategy and boot sector viruses were the most common in the wild for many years.[1]
Traditional computer viruses emerged in the 1980s, driven by the spread of personal computers and the resulting increase in bulletin board systems (BBSs), modem use, and software sharing. Bulletin board-driven software sharing contributed directly to the spread of Trojan horse programs, and viruses were written to infect popularly traded software. Shareware and bootleg software were equally common vectors for viruses on BBSs.[citation needed]
Macro viruses have become common since the mid-1990s. Most of these viruses are written in the scripting languages for Microsoft programs such as Word and Excel and spread throughout Microsoft Office by infecting documents and spreadsheets. Since Word and Excel were also available for Mac OS, most could also spread to Macintosh computers. Although most of these viruses did not have the ability to send infected e-mail, those that did took advantage of the Microsoft Outlook COM interface.[citation needed]
Some old versions of Microsoft Word allowed macros to replicate themselves with additional blank lines. If two macro viruses simultaneously infect a document, the combination of the two, if also self-replicating, can appear as a "mating" of the two and would likely be detected as a virus distinct from its "parents".[18]
A virus may also send a web address link as an instant message to all the contacts on an infected machine. If the recipient, thinking the link is from a friend (a trusted source) follows the link to the website, the virus hosted at the site may be able to infect this new computer and continue propagating.
Viruses that spread using cross-site scripting were first reported in 2002,[19] and were academically demonstrated in 2005.[20] There have been multiple instances of cross-site scripting viruses in the wild, exploiting websites such as MySpace and Yahoo.

Infection strategies

In order to replicate itself, a virus must be permitted to execute code and write to memory. For this reason, many viruses attach themselves to executable files that may be part of legitimate programs. If a user attempts to launch an infected program, the virus' code may be executed simultaneously. Viruses can be divided into two types based on their behavior when they are executed. Nonresident viruses immediately search for other hosts that can be infected, infect those targets, and finally transfer control to the application program they infected. Resident viruses do not search for hosts when they are started. Instead, a resident virus loads itself into memory on execution and transfers control to the host program. The virus stays active in the background and infects new hosts when those files are accessed by other programs or the operating system itself.

Nonresident viruses

Nonresident viruses can be thought of as consisting of a finder module and a replication module. The finder module is responsible for finding new files to infect. For each new executable file the finder module encounters, it calls the replication module to infect that file.

Resident viruses

Resident viruses contain a replication module that is similar to the one that is employed by nonresident viruses. This module, however, is not called by a finder module. The virus loads the replication module into memory when it is executed instead and ensures that this module is executed each time the operating system is called to perform a certain operation. The replication module can be called, for example, each time the operating system executes a file. In this case the virus infects every suitable program that is executed on the computer.
Resident viruses are sometimes subdivided into a category of fast infectors and a category of slow infectors. Fast infectors are designed to infect as many files as possible. A fast infector, for instance, can infect every potential host file that is accessed. This poses a special problem when using anti-virus software, since a virus scanner will access every potential host file on a computer when it performs a system-wide scan. If the virus scanner fails to notice that such a virus is present in memory the virus can "piggy-back" on the virus scanner and in this way infect all files that are scanned. Fast infectors rely on their fast infection rate to spread. The disadvantage of this method is that infecting many files may make detection more likely, because the virus may slow down a computer or perform many suspicious actions that can be noticed by anti-virus software. Slow infectors, on the other hand, are designed to infect hosts infrequently. Some slow infectors, for instance, only infect files when they are copied. Slow infectors are designed to avoid detection by limiting their actions: they are less likely to slow down a computer noticeably and will, at most, infrequently trigger anti-virus software that detects suspicious behavior by programs. The slow infector approach, however, does not seem very successful.
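The behavioral difference between the two classes can be sketched with a deliberately abstract, in-memory simulation. Nothing here touches real files or programs; "hosts" are plain Python dictionaries and "infection" is just a boolean flag, so the snippet only models the spreading pattern described above, not any actual mechanism:

```python
# Abstract model only: hosts are dictionaries, "infection" is a flag.
def make_hosts(n):
    return [{"name": f"program{i}", "infected": False} for i in range(n)]

def nonresident_run(hosts):
    """Nonresident pattern: on execution, immediately mark every available
    host, then hand control back to the original program."""
    for h in hosts:
        h["infected"] = True

def make_resident_hook():
    """Resident pattern: return a hook that marks a host only when that host
    is later 'accessed' by the system."""
    def on_access(host):
        host["infected"] = True
    return on_access

burst = make_hosts(8)
nonresident_run(burst)                     # one burst of activity at launch time
print(sum(h["infected"] for h in burst), "of", len(burst), "marked immediately")

lurking = make_hosts(8)
on_access = make_resident_hook()
for h in lurking[:3]:                      # only hosts that are actually accessed
    on_access(h)
print(sum(h["infected"] for h in lurking), "of", len(lurking), "marked so far")
```

In these terms, the fast/slow infector distinction is simply a question of how often that on-access hook fires relative to normal system activity.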

Computer UPS

Ralph Liebert moved from Cincinnati to Columbus, Ohio, and founded Capital Refrigeration Industries. The business specialized in applying refrigeration technology to emerging and unusual situations. The use of technology to find solutions to the unique concerns of business became the underlying purpose for the company as it grew. Among the special projects that Ralph Liebert was asked to assist with was building cooling systems for computer rooms. Recognizing the opportunity in this emerging market, he focused his efforts on this application and developed it into a major new business.
Using his garage as a workshop, Ralph Liebert built the first prototype precision air conditioner in 1964. He designed a self-contained system that could precisely control air temperature, humidity and cleanliness in a computer room to tolerances of +/- 1 degree Fahrenheit and +/- 2.5 percent relative humidity. It also had the high sensible heat ratio ideally suited for cooling electronics, the ability to run continuously and built-in redundancy -- features essential for data processing operations.
He previewed the prototype for IBM and the computer company made arrangements for Liebert to present the new system at the World Computer Conference in Philadelphia in 1965. Show attendees were excited by the idea and the orders flowed in.
Liebert Corporation was founded on this air control product that same year with just five associates.
Just two years later, in 1967, Liebert made its first international move by setting up a Canadian distributor network. The company has been expanding and finding new markets ever since.
In 1968, the company moved into a new facility in Worthington, Ohio. As its business continued to grow, the company broke ground once again and moved into a 150,000 square foot facility in Columbus, Ohio. This building, expanded several times to add manufacturing space, continues to operate as a precision air production facility and the corporate headquarters.
Moving into power
In the 1970s, the data processing industry was faced with inconsistent power problems, which led to downtime, scrambled data and equipment damage. In response, Liebert formed Conditioned Power Corporation to provide economical, flexible, mobile and secure systems to monitor, distribute and condition electrical power for computer rooms and other sensitive electronics. Its first product was the Datawave magnetic synthesizer, a system that still holds a dominant share of the power conditioning market today. Conditioned Power Corporation operated as a wholly owned subsidiary until 1981, when it became a division of Liebert.
Expansion continued as Liebert broke ground for a second major manufacturing facility in Delaware, Ohio, in 1980. This facility houses the Liebert Research and Development, Heat Transfer and Power Conditioning divisions.
Growing with the industry
Another important milestone for the company was reached in 1981 when Liebert became a public company. The company was taken public under the leadership of Larry Liebert, Ralph Liebert's son, who took over as president in 1980 and served in this position until 1989. The company was represented by the symbol LIEB on the NASDAQ until its acquisition in the late 1980s.
Liebert entered the monitoring market in 1982 with the introduction of Sitemaster. The product allowed for the monitoring and control of multiple Liebert units from a single panel, providing faster response to potential problems and higher uptime for business critical systems.
To further its ability to serve the needs of its computer room customers, Liebert acquired Programmed Power from Franklin Electric in 1983. This brought Liebert into the uninterruptible power supply (UPS) industry. The Power division was expanded and remains a leader in the design and manufacture of UPS systems. That same year, the company opened its first overseas production facility in Cork, Ireland. Continuing its 1983 expansion, Liebert formed its Customer Service and Support division. Today, it is the industry's largest such group.
As Liebert grew, the company never forgot the importance of its dedicated associates. To that end, Liebert opened its own Training Center in 1984. Since that time, thousands of Liebert associates and customers have completed coursework and gained hands-on experience in the operation and maintenance of Liebert air, power and monitoring systems.
Continuing to seek solutions for its customers, Liebert launched SiteScan, a comprehensive site management system, in 1985. This interactive system links all computer support systems in a total network, establishing Liebert as a single source for total, integrated computer support.
In 1987, Liebert Corporation marked yet another milestone when it was acquired by Emerson Electric Company of St. Louis, Mo. With the financial backing of Emerson and the addition of its products and systems, Liebert continued its quest to provide single source, computer support protection.
Leveraging strengths
Emerson was already a strong force in the uninterruptible power supply business, so the acquisition not only expanded an already strong power protection offering but, combined with Liebert's leadership in precision air conditioning, gave the new Liebert a unique combination of total systems protection for critical systems of all sizes.
Liebert then continued to expand its ability to protect critical facilities through the acquisition of Control Concepts in the late 1980s. Control Concepts brought a high-quality TVSS (transient voltage surge suppression) product offering to the table.
About this same time, businesses were distributing computers throughout their organization, essentially decentralizing information technology. Liebert responded to this new decentralized and networked model through the establishment of its Distributed Processing Group. This group not only oversaw the manufacturing and marketing of single-phase UPS systems, but also the development of the revolutionary new Little Glass House (later renamed the Foundation MCR), introduced in 1994.
The LGH was the first integrated network support system, allowing smaller but still critical computer systems to be protected with air conditioning, UPS, monitoring and security in a special enclosure. That same year, the company earned ISO 9001 certification, further supporting its reputation as a quality supplier.
As computer technologies grew and moved into new industries, Liebert, with the financial backing of Emerson, tailored products to deliver environmental and power protection to computer users, networks and distributed processing applications, industrial processes, lab settings and telecommunications applications.
Channel expansion
As distributed processing needs expanded, Liebert complemented its prestigious and experienced local sales offices with new reseller sales partners. Rather than instituting a competitive channel, Liebert orchestrated a unique relationship between these sales partners that resulted in benefits to all parties.
Continuing its commitment to its customers and its sales partners, Liebert established eCommerce sites for its sales partners in 1998. These sites have become important links between Liebert and its partners, and have enabled them to provide better support to their customers through major enhancements in quoting, order processing, training and selling tools.
Solutions from the grid to the chip
In 2000, Emerson formed Emerson Network Power to bring together its divisions that provide solutions for network and computer protection. Featuring industry-leading brands such as Liebert, ASCO and Astec, Emerson Network Power has become a one-stop source for network protection, from power components, power systems and service to climate control and critical space monitoring.
As part of the Emerson Network Power family, Liebert offers a broader range of solutions through its representative network and can more easily extend into new markets. And the organization has not lost its commitment to innovation.
In 2002, the Liebert XD supplemental cooling system was introduced. This solution anticipated the challenge IT organizations would face as they implemented newer, high-density equipment, and served as a blueprint for the industry on how high-density servers could be effectively cooled. The Liebert XD system is also playing an instrumental role in helping IT organizations increase data center efficiency.
This development was followed by the launch of the Liebert DS, the fourth-generation precision cooling system that traces its roots back to the company's earliest days. The Liebert DS introduced variable capacity compressors to the data center and marked the first application of Liebert iCom technology.
New power distribution and UPS solutions have also been introduced. The Liebert FPC and Liebert FDC changed the face of power distribution while delivering the flexibility and scalability data centers with rapidly expanding server populations require. The Liebert NX is a three-phase UPS that features a compact design and high efficiency while pioneering new ways to scale and parallel UPS systems.
As it delivers reliable, flexible and efficient solutions for business-critical systems to customers around the globe, Emerson Network Power is driving the future of mission-critical power and cooling technology. Organizations worldwide depend on Liebert technology to maximize reliability, availability and economy of systems, applications and data in their data centers, computer rooms, network closets and other critical areas. Liebert and Emerson Network Power will continue to provide unmatched quality, superior expertise, local and global service and support, and technology leadership.

Computer microphone

A microphone (colloquially called a mic or mike; both pronounced /ˈmaɪk/[1]) is an acoustic-to-electric transducer or sensor that converts sound into an electrical signal. In 1876, Emile Berliner invented the first microphone used as a telephone voice transmitter. Microphones are used in many applications such as telephones, tape recorders, karaoke systems, hearing aids, motion picture production, live and recorded audio engineering, FRS radios, megaphones, in radio and television broadcasting and in computers for recording voice, speech recognition, VoIP, and for non-acoustic purposes such as ultrasonic checking or knock sensors.
Most microphones today use electromagnetic induction (dynamic microphone), capacitance change (condenser microphone), piezoelectric generation, or light modulation to produce an electrical voltage signal from mechanical vibration.

Condenser microphone

Inside the Oktava 319 condenser microphone
The condenser microphone, invented at Bell Labs in 1916 by E. C. Wente,[2] is also called a capacitor microphone or electrostatic microphone. Here, the diaphragm acts as one plate of a capacitor, and the vibrations produce changes in the distance between the plates. There are two types, depending on the method of extracting the audio signal from the transducer: DC-biased and radio frequency (RF) or high frequency (HF) condenser microphones. With a DC-biased microphone, the plates are biased with a fixed charge (Q). The voltage maintained across the capacitor plates changes with the vibrations in the air, according to the capacitance equation (C = Q / V), where Q = charge in coulombs, C = capacitance in farads and V = potential difference in volts. The capacitance of the plates is inversely proportional to the distance between them for a parallel-plate capacitor. (See capacitance for details.) The assembly of fixed and movable plates is called an "element" or "capsule."
A nearly constant charge is maintained on the capacitor. As the capacitance changes, the charge across the capacitor does change very slightly, but at audible frequencies it is sensibly constant. The capacitance of the capsule (around 5 to 100 pF) and the value of the bias resistor (100 megohms to tens of gigohms) form a filter that is high-pass for the audio signal, and low-pass for the bias voltage. Note that the time constant of an RC circuit equals the product of the resistance and capacitance.
Within the time-frame of the capacitance change (as much as 50 ms at 20 Hz audio signal), the charge is practically constant and the voltage across the capacitor changes instantaneously to reflect the change in capacitance. The voltage across the capacitor varies above and below the bias voltage. The voltage difference between the bias and the capacitor is seen across the series resistor. The voltage across the resistor is amplified for performance or recording.
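To put rough numbers on these relationships, the following sketch uses assumed but plausible values (a 50 pF capsule, a 1 gigohm bias resistor and a 60 V polarizing voltage, chosen for illustration rather than taken from any particular microphone):

```python
import math

C = 50e-12       # assumed capsule capacitance: 50 pF
R = 1e9          # assumed bias resistor: 1 gigohm
V_BIAS = 60.0    # assumed polarizing voltage in volts

# RC time constant and the corresponding high-pass corner for the audio signal.
tau = R * C
f_corner = 1 / (2 * math.pi * tau)
print(f"time constant: {tau * 1e3:.0f} ms, high-pass corner: {f_corner:.1f} Hz")

# With the charge Q = C * V held nearly constant, a small capacitance change
# produces a proportional voltage change, since V = Q / C.
Q = C * V_BIAS
dC = 0.01 * C                         # diaphragm motion changes C by 1 %
delta_v = V_BIAS - Q / (C + dC)
print(f"1 % capacitance change -> {delta_v:.2f} V swing across the capsule")
```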
AKG C451B small-diaphragm condenser microphone
RF condenser microphones use a comparatively low RF voltage, generated by a low-noise oscillator. The oscillator may either be amplitude modulated by the capacitance changes produced by the sound waves moving the capsule diaphragm, or the capsule may be part of a resonant circuit that modulates the frequency of the oscillator signal. Demodulation yields a low-noise audio frequency signal with a very low source impedance. The absence of a high bias voltage permits the use of a diaphragm with looser tension, which may be used to achieve wider frequency response due to higher compliance. The RF biasing process results in a lower electrical impedance capsule, a useful by-product of which is that RF condenser microphones can be operated in damp weather conditions that could create problems in DC-biased microphones with contaminated insulating surfaces. The Sennheiser "MKH" series of microphones use the RF biasing technique.
Condenser microphones span the range from telephone transmitters through inexpensive karaoke microphones to high-fidelity recording microphones. They generally produce a high-quality audio signal and are now the popular choice in laboratory and studio recording applications. The inherent suitability of this technology is due to the very small mass that must be moved by the incident sound wave, unlike other microphone types that require the sound wave to do more work. They require a power source, provided either via microphone inputs as phantom power or from a small battery. Power is necessary for establishing the capacitor plate voltage, and is also needed to power the microphone electronics (impedance conversion in the case of electret and DC-polarized microphones, demodulation or detection in the case of RF/HF microphones). Condenser microphones are also available with two diaphragms that can be electrically connected to provide a range of polar patterns (see below), such as cardioid, omnidirectional, and figure-eight. It is also possible to vary the pattern continuously with some microphones, for example the Røde NT2000 or CAD M179.

Electret condenser microphone

First patent on foil electret microphone by G. M. Sessler et al. (pages 1 to 3)
An electret microphone is a relatively new type of capacitor microphone invented at Bell Laboratories in 1962 by Gerhard Sessler and Jim West.[3] The externally applied charge described above under condenser microphones is replaced by a permanent charge in an electret material. An electret is a dielectric material that has been permanently electrically charged or polarized. The name comes from electrostatic and magnet; a static charge is embedded in an electret by alignment of the static charges in the material, much the way a magnet is made by aligning the magnetic domains in a piece of iron.
Due to their good performance and ease of manufacture, hence low cost, the vast majority of microphones made today are electret microphones; a semiconductor manufacturer[4] estimates annual production at over one billion units. Nearly all cell-phone, computer, PDA and headset microphones are electret types. They are used in many applications, from high-quality recording and lavalier use to built-in microphones in small sound recording devices and telephones. Though electret microphones were once considered low quality, the best ones can now rival traditional condenser microphones in every respect and can even offer the long-term stability and ultra-flat response needed for a measurement microphone. Unlike other capacitor microphones, they require no polarizing voltage, but often contain an integrated preamplifier that does require power (often incorrectly called polarizing power or bias). This preamplifier is frequently phantom powered in sound reinforcement and studio applications. Microphones designed for personal computer (PC) use, sometimes called multimedia microphones, use a stereo 3.5 mm plug (though a mono source) with the ring receiving power via a resistor from (normally) a 5 V supply in the computer; unfortunately, a number of incompatible dynamic microphones are fitted with 3.5 mm plugs too. While few electret microphones rival the best DC-polarized units in terms of noise level, this is not due to any inherent limitation of the electret. Rather, mass production techniques needed to produce microphones cheaply don't lend themselves to the precision needed to produce the highest quality microphones, due to the tight tolerances required in internal dimensions. These tolerances are the same for all condenser microphones, whether the DC, RF or electret technology is used.

Dynamic microphone

Patti Smith singing into a Shure SM58 (dynamic cardioid type) microphone
Dynamic microphones work via electromagnetic induction. They are robust, relatively inexpensive and resistant to moisture. This, coupled with their potentially high gain before feedback, makes them ideal for on-stage use.
Moving-coil microphones use the same dynamic principle as in a loudspeaker, only reversed. A small movable induction coil, positioned in the magnetic field of a permanent magnet, is attached to the diaphragm. When sound enters through the windscreen of the microphone, the sound wave moves the diaphragm. When the diaphragm vibrates, the coil moves in the magnetic field, producing a varying current in the coil through electromagnetic induction. A single dynamic membrane does not respond linearly to all audio frequencies. Some microphones for this reason utilize multiple membranes for the different parts of the audio spectrum and then combine the resulting signals. Combining the multiple signals correctly is difficult and designs that do this are rare and tend to be expensive. There are on the other hand several designs that are more specifically aimed towards isolated parts of the audio spectrum. The AKG D 112, for example, is designed for bass response rather than treble.[5] In audio engineering several kinds of microphones are often used at the same time to get the best result.

Ribbon microphone

Edmund Lowe using a ribbon microphone
Ribbon microphones use a thin, usually corrugated metal ribbon suspended in a magnetic field. The ribbon is electrically connected to the microphone's output, and its vibration within the magnetic field generates the electrical signal. Ribbon microphones are similar to moving coil microphones in the sense that both generate a signal by means of electromagnetic induction. Basic ribbon microphones detect sound in a bi-directional (also called figure-eight) pattern because the ribbon, which is open to sound both front and back, responds to the pressure gradient rather than the sound pressure. Though the symmetrical front and rear pickup can be a nuisance in normal stereo recording, the high side rejection can be used to advantage by positioning a ribbon microphone horizontally, for example above cymbals, so that the rear lobe picks up only sound from the cymbals. Crossed figure 8, or Blumlein pair, stereo recording is gaining in popularity, and the figure 8 response of a ribbon microphone is ideal for that application.
Other directional patterns are produced by enclosing one side of the ribbon in an acoustic trap or baffle, allowing sound to reach only one side. The classic RCA Type 77-DX microphone has several externally adjustable positions of the internal baffle, allowing the selection of several response patterns ranging from "Figure-8" to "Unidirectional". Such older ribbon microphones, some of which still provide high quality sound reproduction, were once valued for this reason, but a good low-frequency response could only be obtained when the ribbon was suspended very loosely, which made them relatively fragile. Modern ribbon materials, including new nanomaterials[6] have now been introduced that eliminate those concerns, and even improve the effective dynamic range of ribbon microphones at low frequencies. Protective wind screens can reduce the danger of damaging a vintage ribbon, and also reduce plosive artifacts in the recording. Properly designed wind screens produce negligible treble attenuation. In common with other classes of dynamic microphone, ribbon microphones don't require phantom power; in fact, this voltage can damage some older ribbon microphones. Some new modern ribbon microphone designs incorporate a preamplifier and, therefore, do require phantom power, and circuits of modern passive ribbon microphones, i.e., those without the aforementioned preamplifier, are specifically designed to resist damage to the ribbon and transformer by phantom power. Also there are new ribbon materials available that are immune to wind blasts and phantom power.
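The figure-eight response described above, and the cardioid pattern obtained by blocking one side, can be sketched with the standard idealized first-order polar equations (idealized math for illustration, not measured data for any particular microphone):

```python
import math

def figure_eight(theta_deg):
    """Idealized pressure-gradient (ribbon) response: cos(theta)."""
    return math.cos(math.radians(theta_deg))

def cardioid(theta_deg):
    """Idealized cardioid: an equal mix of pressure and pressure-gradient terms."""
    return 0.5 * (1 + math.cos(math.radians(theta_deg)))

for angle in (0, 90, 180):
    print(f"{angle:3d} deg   figure-8: {figure_eight(angle):+.2f}   "
          f"cardioid: {cardioid(angle):+.2f}")
# 0 deg: both at full sensitivity; 90 deg: the figure-8 nulls (the high side
# rejection mentioned above); 180 deg: the figure-8 responds with inverted
# polarity while the cardioid nulls.
```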

Computer monitor

A 19-inch LG flat-panel LCD monitor.
A monitor or display (sometimes called a visual display unit) is an electronic visual display for computers. The monitor comprises the display device, circuitry, and an enclosure. The display device in modern monitors is typically a thin film transistor liquid crystal display (TFT-LCD) thin panel, while older monitors use a cathode ray tube about as deep as the screen size.
Originally computer monitors were used for data processing and television receivers for entertainment; increasingly, computers are being used both for data processing and entertainment. Displays exclusively for data use tend to have an aspect ratio of 4:3; those used also (or solely) for entertainment are usually 16:9.

connector used on the video cable. The disadvantage of TTL monitors was the limited number of colors available due to the low number of digital bits used for video signaling. Modern monochrome monitors use the same 15-pin SVGA connector as standard color monitors. They are capable of displaying 32-bit grayscale at 1024x768 resolution, making them able to interface with modern computers.
TTL Monochrome monitors only made use of five out of the nine pins. One pin was used as a ground, and two pins were used for horizontal/vertical synchronization. The electron gun was controlled by two separate digital signals, a video bit, and an intensity bit to control the brightness of the drawn pixels. Only four shades were possible; black, dim, medium or bright.
CGA monitors used four digital signals to control the three electron guns used in color CRTs, in a signaling method known as RGBI, or Red, Green and Blue, plus Intensity. Each of the three RGB colors can be switched on or off independently. The intensity bit increases the brightness of all guns that are switched on, or if no colors are switched on the intensity bit will switch on all guns at a very low brightness to produce a dark grey. A CGA monitor is only capable of rendering 16 colors. The CGA monitor was not exclusively used by PC-based hardware. The Commodore 128 could also utilize CGA monitors. Many CGA monitors were capable of displaying composite video via a separate jack.
EGA monitors used six digital signals to control the three electron guns in a signaling method known as RrGgBb. Unlike CGA, each gun is allocated its own intensity bit. This allowed each of the three primary colors to have four different states (off, soft, medium, and bright), resulting in 64 colors.
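A small enumeration makes those color counts concrete. It only counts the possible signal combinations; the exact shades each combination produced, including CGA's special dark-grey case, varied from monitor to monitor:

```python
# Count the color codes available on the digital RGBI and RrGgBb interfaces.
def cga_rgbi_codes():
    """Four binary signal lines (R, G, B, I) give 2**4 combinations."""
    return [(r, g, b, i) for r in (0, 1) for g in (0, 1)
                         for b in (0, 1) for i in (0, 1)]

def ega_rrggbb_codes():
    """Two lines per gun give four levels per primary, so 4**3 combinations."""
    return [(r, g, b) for r in range(4) for g in range(4) for b in range(4)]

print(len(cga_rgbi_codes()), "CGA color codes")   # 16
print(len(ega_rrggbb_codes()), "EGA color codes") # 64
```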
Although not supported in the original IBM specification, many vendors of clone graphics adapters implemented backwards monitor compatibility and auto-detection. For example, EGA cards produced by Paradise could operate as an MDA or CGA adapter if a monochrome or CGA monitor was used in place of an EGA monitor. Many CGA cards were also capable of operating as an MDA or Hercules card if a monochrome monitor was used.

Computer processors

Die of an Intel 80486DX2 microprocessor (actual size: 12×6.75 mm) in its packaging.
The central processing unit (CPU) is the portion of a computer system that carries out the instructions of a computer program, and is the primary element carrying out the computer's functions. The central processing unit carries out each instruction of the program in sequence, to perform the basic arithmetical, logical, and input/output operations of the system. This term has been in use in the computer industry at least since the early 1960s.[1] The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation remains much the same.
Early CPUs were custom-designed as a part of a larger, sometimes one-of-a-kind, computer. However, this costly method of designing custom CPUs for a particular application has largely given way to the development of mass-produced processors that are made for one or many purposes. This standardization trend generally began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of these digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in everything from automobiles to cell phones and children's toys.
Computers such as the ENIAC had to be physically rewired in order to perform different tasks, which caused these machines to be called "fixed-program computers." Since the term "CPU" is generally defined as a software (computer program) execution device, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer.
The idea of a stored-program computer was already present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so the machine could be finished sooner. On June 30, 1945, before ENIAC was even completed, mathematician John von Neumann distributed the paper entitled "First Draft of a Report on the EDVAC". It outlined the design of a stored-program computer that would eventually be completed in August 1949.[2] EDVAC was designed to perform a certain number of instructions (or operations) of various types. These instructions could be combined to create useful programs for the EDVAC to run. Significantly, the programs written for EDVAC were stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program, or software, that EDVAC ran could be changed simply by changing the contents of the computer's memory.
While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both. Most modern CPUs are primarily von Neumann in design, but elements of the Harvard architecture are commonly seen as well.
As a digital device, a CPU is limited to a set of discrete states, and requires some kind of switching elements to differentiate between and change states. Prior to commercial development of the transistor, electrical relays and vacuum tubes (thermionic valves) were commonly used as switching elements. Although these had distinct speed advantages over earlier, purely mechanical designs, they were unreliable for various reasons. For example, building direct current sequential logic circuits out of relays requires additional hardware to cope with the problem of contact bounce. While vacuum tubes do not suffer from contact bounce, they must heat up before becoming fully operational, and they eventually cease to function due to slow contamination of their cathodes that occurs in the course of normal operation. If a tube's vacuum seal leaks, as sometimes happens, cathode contamination is accelerated. Usually, when a tube failed, the CPU would have to be diagnosed to locate the failed component so it could be replaced. Therefore, early electronic (vacuum tube based) computers were generally faster but less reliable than electromechanical (relay based) computers.
Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the (slower, but earlier) Harvard Mark I failed very rarely.[1] In the end, tube based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs (see below for a discussion of clock rate). Clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time, limited largely by the speed of the switching devices they were built with.

Discrete transistor and integrated circuit CPUs

CPU, core memory, and external bus interface of a DEC PDP-8/I, made of medium-scale integrated circuits.
The design complexity of CPUs increased as various technologies facilitated building smaller and more reliable electronic devices. The first such improvement came with the advent of the transistor. Transistorized CPUs during the 1950s and 1960s no longer had to be built out of bulky, unreliable, and fragile switching elements like vacuum tubes and electrical relays. With this improvement more complex and reliable CPUs were built onto one or several printed circuit boards containing discrete (individual) components.
During this period, a method of manufacturing many transistors in a compact space gained popularity. The integrated circuit (IC) allowed a large number of transistors to be manufactured on a single semiconductor-based die, or "chip." At first only very basic non-specialized digital circuits such as NOR gates were miniaturized into ICs. CPUs based upon these "building block" ICs are generally referred to as "small-scale integration" (SSI) devices. SSI ICs, such as the ones used in the Apollo guidance computer, usually contained transistor counts numbering in multiples of ten. To build an entire CPU out of SSI ICs required thousands of individual chips, but still consumed much less space and power than earlier discrete transistor designs. As microelectronic technology advanced, an increasing number of transistors were placed on ICs, thus decreasing the quantity of individual ICs needed for a complete CPU. MSI and LSI (medium- and large-scale integration) ICs increased transistor counts to hundreds, and then thousands.
In 1964 IBM introduced its System/360 computer architecture which was used in a series of computers that could run the same programs with different speed and performance. This was significant at a time when most electronic computers were incompatible with one another, even those made by the same manufacturer. To facilitate this improvement, IBM utilized the concept of a microprogram (often called "microcode"), which still sees widespread usage in modern CPUs.[3] The System/360 architecture was so popular that it dominated the mainframe computer market for decades and left a legacy that is still continued by similar modern computers like the IBM zSeries. In the same year (1964), Digital Equipment Corporation (DEC) introduced another influential computer aimed at the scientific and research markets, the PDP-8. DEC would later introduce the extremely popular PDP-11 line that originally was built with SSI ICs but was eventually implemented with LSI components once these became practical. In stark contrast with its SSI and MSI predecessors, the first LSI implementation of the PDP-11 contained a CPU composed of only four LSI integrated circuits.[4]
Transistor-based computers had several distinct advantages over their predecessors. Aside from facilitating increased reliability and lower power consumption, transistors also allowed CPUs to operate at much higher speeds because of the short switching time of a transistor in comparison to a tube or relay. Thanks to both the increased reliability as well as the dramatically increased speed of the switching elements (which were almost exclusively transistors by this time), CPU clock rates in the tens of megahertz were obtained during this period. Additionally while discrete transistor and IC CPUs were in heavy usage, new high-performance designs like SIMD (Single Instruction Multiple Data) vector processors began to appear. These early experimental designs later gave rise to the era of specialized supercomputers like those made by Cray Inc.

Microprocessors

The die from an Intel 8742
Intel 80486DX2 microprocessor in a ceramic PGA package.
The introduction of the microprocessor in the 1970s significantly affected the design and implementation of CPUs. Since the introduction of the first commercially available microprocessor (the Intel 4004) in 1971 and the first widely used microprocessor (the Intel 8080) in 1974, this class of CPUs has almost completely overtaken all other central processing unit implementation methods. Mainframe and minicomputer manufacturers of the time launched proprietary IC development programs to upgrade their older computer architectures, and eventually produced instruction set compatible microprocessors that were backward-compatible with their older hardware and software. Combined with the advent and eventual vast success of the now ubiquitous personal computer, the term "CPU" is now applied almost exclusively to microprocessors.
Previous generations of CPUs were implemented as discrete components and numerous small integrated circuits (ICs) on one or more circuit boards. Microprocessors, on the other hand, are CPUs manufactured on a very small number of ICs; usually just one. The overall smaller CPU size as a result of being implemented on a single die means faster switching time because of physical factors like decreased gate parasitic capacitance. This has allowed synchronous microprocessors to have clock rates ranging from tens of megahertz to several gigahertz. Additionally, as the ability to construct exceedingly small transistors on an IC has increased, the complexity and number of transistors in a single CPU has increased dramatically. This widely observed trend is described by Moore's law, which has proven to be a fairly accurate predictor of the growth of CPU (and other IC) complexity to date.
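Moore's law can be read as a simple compounding rule. The sketch below assumes the commonly quoted doubling period of roughly two years and a hypothetical one-million-transistor starting point; it illustrates the shape of the trend, not any specific product:

```python
# Moore's law as a compounding rule of thumb.
def projected_transistors(start_count, start_year, target_year, doubling_years=2.0):
    doublings = (target_year - start_year) / doubling_years
    return start_count * 2 ** doublings

# Hypothetical 1,000,000-transistor CPU in 1990, projected forward.
for year in (1990, 2000, 2010):
    print(year, f"{projected_transistors(1_000_000, 1990, year):,.0f} transistors")
```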
While the complexity, size, construction, and general form of CPUs have changed drastically over the past sixty years, it is notable that the basic design and function has not changed much at all. Almost all common CPUs today can be very accurately described as von Neumann stored-program machines. As the aforementioned Moore's law continues to hold true, concerns have arisen about the limits of integrated circuit transistor technology. Extreme miniaturization of electronic gates is causing the effects of phenomena like electromigration and subthreshold leakage to become much more significant. These newer concerns are among the many factors causing researchers to investigate new methods of computing such as the quantum computer, as well as to expand the usage of parallelism and other methods that extend the usefulness of the classical von Neumann model.

Computer motherboard

A motherboard, like a backplane, provides the electrical connections by which the other components of the system communicate, but unlike a backplane, it also connects the central processing unit and hosts other subsystems and devices.
A typical desktop computer has its microprocessor, main memory, and other essential components connected to the motherboard. Other components such as external storage, controllers for video display and sound, and peripheral devices may be attached to the motherboard as plug-in cards or via cables, although in modern computers it is increasingly common to integrate some of these peripherals into the motherboard itself.
An important component of a motherboard is the microprocessor's supporting chipset, which provides the supporting interfaces between the CPU and the various buses and external components. This chipset determines, to an extent, the features and capabilities of the motherboard.
Modern motherboards include, at a minimum:
  • sockets (or slots) in which one or more microprocessors may be installed[3]
  • slots into which the system's main memory is to be installed (typically in the form of DIMM modules containing DRAM chips)
  • a chipset which forms an interface between the CPU's front-side bus, main memory, and peripheral buses
  • non-volatile memory chips (usually Flash ROM in modern motherboards) containing the system's firmware or BIOS
  • a clock generator which produces the system clock signal to synchronize the various components
  • slots for expansion cards (these interface to the system via the buses supported by the chipset)
  • power connectors, which receive electrical power from the computer power supply and distribute it to the CPU, chipset, main memory, and expansion cards.[4]
The Octek Jaguar V motherboard from 1993.[5] This board has 6 ISA slots but few onboard peripherals, as evidenced by the lack of external connectors.
Additionally, nearly all motherboards include logic and connectors to support commonly used input devices, such as PS/2 connectors for a mouse and keyboard. Early personal computers such as the Apple II or IBM PC included only this minimal peripheral support on the motherboard. Occasionally video interface hardware was also integrated into the motherboard; for example, on the Apple II and rarely on IBM-compatible computers such as the IBM PC Jr. Additional peripherals such as disk controllers and serial ports were provided as expansion cards.
Given the high thermal design power of high-speed computer CPUs and components, modern motherboards nearly always include heat sinks and mounting points for fans to dissipate excess heat.

CPU sockets

A CPU socket or slot is an electrical component that attaches to a printed circuit board (PCB) and is designed to house a CPU (also called a microprocessor). It is a special type of integrated circuit socket designed for very high pin counts. A CPU socket provides many functions, including a physical structure to support the CPU, support for a heat sink, facilitating replacement (as well as reducing cost), and most importantly, forming an electrical interface both with the CPU and the PCB. CPU sockets are most often found in desktop and server computers (laptops typically use surface-mount CPUs), particularly those based on the Intel x86 architecture. A CPU socket type and motherboard chipset must support the CPU series and speed.

Integrated peripherals

Block diagram of a modern motherboard, which supports many on-board peripheral functions as well as several expansion slots.
With the steadily declining costs and size of integrated circuits, it is now possible to include support for many peripherals on the motherboard. By combining many functions on one PCB, the physical size and total cost of the system may be reduced; highly integrated motherboards are thus especially popular in small form factor and budget computers.
For example, the ECS RS485M-M,[6] a typical modern budget motherboard for computers based on AMD processors, has on-board support for a very large range of peripherals.
Expansion cards to support all of these functions would have cost hundreds of dollars even a decade ago; however, as of April 2007 such highly integrated motherboards are available for as little as $30 in the USA.

Peripheral card slots

The number and layout of expansion connections on a typical motherboard of 2009 differ depending on its form factor standard.
A standard ATX motherboard will typically have one PCI-E 16x connection for a graphics card, two conventional PCI slots for various expansion cards, and one PCI-E 1x slot (which is expected eventually to supersede PCI). A standard EATX motherboard will have one PCI-E 16x connection for a graphics card, and a varying number of PCI and PCI-E 1x slots. It can sometimes also have a PCI-E 4x slot. (This varies between brands and models.)
Some motherboards have two PCI-E 16x slots, to allow more than two monitors without special hardware, or to use a graphics technology called SLI (for Nvidia) or CrossFire (for ATI). These allow two graphics cards to be linked together, permitting better performance in intensive graphical computing tasks, such as gaming and video editing.
As of 2007, virtually all motherboards come with at least four USB ports on the rear, with at least two connections on the board internally for wiring additional front ports that may be built into the computer's case. Ethernet is also standard: the built-in port connects the computer to a local network or a broadband modem via a standard networking cable. A sound chip is always included on the motherboard, to allow sound output without the need for any extra components. This allows computers to be far more multimedia-based than before. Some motherboards have their graphics chip built into the motherboard rather than needing a separate card. A separate card may still be used.

Temperature and reliability

Motherboards are generally air cooled with heat sinks often mounted on larger chips, such as the Northbridge, in modern motherboards. If the motherboard is not cooled properly, it can cause the computer to crash. Passive cooling, or a single fan mounted on the power supply, was sufficient for many desktop computer CPUs until the late 1990s; since then, most have required CPU fans mounted on their heat sinks, due to rising clock speeds and power consumption. Most motherboards have connectors for additional case fans as well. Newer motherboards have integrated temperature sensors to detect motherboard and CPU temperatures, and controllable fan connectors which the BIOS or operating system can use to regulate fan speed. Some computers (which typically have high-performance microprocessors, large amounts of RAM, and high-performance video cards) use a water-cooling system instead of many fans.
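As a sketch of how a BIOS or operating system might turn a temperature reading into a fan speed (a made-up linear fan curve with assumed set points, not any particular vendor's algorithm):

```python
def fan_duty_percent(temp_c, idle_temp=35.0, max_temp=75.0,
                     min_duty=25.0, max_duty=100.0):
    """Map a measured temperature to a PWM duty cycle with a simple linear
    ramp between an idle set point and a full-speed set point (assumed values)."""
    if temp_c <= idle_temp:
        return min_duty
    if temp_c >= max_temp:
        return max_duty
    fraction = (temp_c - idle_temp) / (max_temp - idle_temp)
    return min_duty + fraction * (max_duty - min_duty)

for t in (30, 50, 70, 80):
    print(f"{t} degC -> fan at {fan_duty_percent(t):.0f} %")
```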
Some small form factor computers and home theater PCs designed for quiet and energy-efficient operation boast fan-less designs. This typically requires the use of a low-power CPU, as well as careful layout of the motherboard and other components to allow for heat sink placement.
A 2003 study[7] found that some spurious computer crashes and general reliability issues, ranging from screen image distortions to I/O read/write errors, can be attributed not to software or peripheral hardware but to aging capacitors on PC motherboards. Ultimately this was shown to be the result of a faulty electrolyte formulation.[8]
For more information on premature capacitor failure on PC motherboards, see capacitor plague.
Motherboards use electrolytic capacitors to filter the DC power distributed around the board. These capacitors age at a temperature-dependent rate, as their water-based electrolytes slowly evaporate. This can lead to loss of capacitance and subsequent motherboard malfunctions due to voltage instabilities. While most capacitors are rated for 2000 hours of operation at 105 °C,[9] their expected design life roughly doubles for every 10 °C below this. At 45 °C a lifetime of 15 years can be expected. This appears reasonable for a computer motherboard. However, many manufacturers have delivered substandard capacitors,[citation needed] which significantly reduce life expectancy. Inadequate case cooling and elevated temperatures easily exacerbate this problem. It is possible, but tedious and time-consuming, to find and replace failed capacitors on PC motherboards.
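The doubling rule quoted above can be applied directly to those figures (a rule-of-thumb estimate assuming continuous operation, not a manufacturer guarantee):

```python
# "Design life roughly doubles for every 10 degC below the rated temperature."
def expected_life_hours(rated_hours, rated_temp_c, operating_temp_c):
    return rated_hours * 2 ** ((rated_temp_c - operating_temp_c) / 10.0)

hours = expected_life_hours(2000, 105, 45)   # 2000 h rating at 105 degC, run at 45 degC
years = hours / (24 * 365)
print(f"{hours:,.0f} hours, roughly {years:.0f} years of continuous operation")
```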

Form factor

microATX form factor motherboard
Motherboards are produced in a variety of sizes and shapes called computer form factors, some of which are specific to individual computer manufacturers. However, the motherboards used in IBM-compatible computers have been standardized to fit various case sizes. As of 2007, most desktop computer motherboards use one of these standard form factors—even those found in Macintosh and Sun computers, which have not traditionally been built from commodity components. The current desktop PC form factor of choice is ATX. A case's motherboard and PSU form factor must all match, though some smaller form factor motherboards of the same family will fit larger cases. For example, an ATX case will usually accommodate a microATX motherboard.
Laptop computers generally use highly integrated, miniaturized and customized motherboards. This is one of the reasons that laptop computers are difficult to upgrade and expensive to repair. Often the failure of one laptop component requires the replacement of the entire motherboard, which is usually more expensive than a desktop motherboard due to the large number of integrated components.