Wednesday, December 1, 2010

computer motherboard

A motherboard, like a backplane, provides the electrical connections by which the other components of the system communicate, but unlike a backplane, it also connects the central processing unit and hosts other subsystems and devices.
A typical desktop computer has its microprocessor, main memory, and other essential components connected to the motherboard. Other components such as external storage, controllers for video display and sound, and peripheral devices may be attached to the motherboard as plug-in cards or via cables, although in modern computers it is increasingly common to integrate some of these peripherals into the motherboard itself.
An important component of a motherboard is the microprocessor's supporting chipset, which provides the supporting interfaces between the CPU and the various buses and external components. This chipset determines, to an extent, the features and capabilities of the motherboard.
Modern motherboards include, at a minimum:
  • sockets (or slots) in which one or more microprocessors may be installed[3]
  • slots into which the system's main memory is to be installed (typically in the form of DIMM modules containing DRAM chips)
  • a chipset which forms an interface between the CPU's front-side bus, main memory, and peripheral buses
  • non-volatile memory chips (usually Flash ROM in modern motherboards) containing the system's firmware or BIOS
  • a clock generator which produces the system clock signal to synchronize the various components
  • slots for expansion cards (these interface to the system via the buses supported by the chipset)
  • power connectors, which receive electrical power from the computer power supply and distribute it to the CPU, chipset, main memory, and expansion cards.[4]
The Octek Jaguar V motherboard from 1993.[5] This board has 6 ISA slots but few onboard peripherals, as evidenced by the lack of external connectors.
Additionally, nearly all motherboards include logic and connectors to support commonly used input devices, such as PS/2 connectors for a mouse and keyboard. Early personal computers such as the Apple II or IBM PC included only this minimal peripheral support on the motherboard. Occasionally video interface hardware was also integrated into the motherboard; for example, on the Apple II and rarely on IBM-compatible computers such as the IBM PCjr. Additional peripherals such as disk controllers and serial ports were provided as expansion cards.
Given the high thermal design power of high-speed computer CPUs and components, modern motherboards nearly always include heat sinks and mounting points for fans to dissipate excess heat.

CPU sockets

A CPU socket or slot is an electrical component that attaches to a printed circuit board (PCB) and is designed to house a CPU (also called a microprocessor). It is a special type of integrated circuit socket designed for very high pin counts. A CPU socket provides many functions, including a physical structure to support the CPU, support for a heat sink, facilitating replacement (as well as reducing cost), and most importantly, forming an electrical interface both with the CPU and the PCB. CPU sockets are found on the motherboard in most desktop and server computers, particularly those based on the Intel x86 architecture; laptops typically use surface-mounted CPUs instead. The CPU socket type and motherboard chipset must support the CPU series and speed.

Integrated peripherals

Block diagram of a modern motherboard, which supports many on-board peripheral functions as well as several expansion slots.
With the steadily declining costs and size of integrated circuits, it is now possible to include support for many peripherals on the motherboard. By combining many functions on one PCB, the physical size and total cost of the system may be reduced; highly integrated motherboards are thus especially popular in small form factor and budget computers.
For example, the ECS RS485M-M,[6] a typical modern budget motherboard for computers based on AMD processors, has on-board support for a very large range of peripherals.
Expansion cards to support all of these functions would have cost hundreds of dollars even a decade ago; however, as of April 2007 such highly integrated motherboards are available for as little as $30 in the USA.

Peripheral card slots

The number and type of peripheral card slots on a typical motherboard of 2009 depends on the form factor standard it follows.
A standard ATX motherboard will typically have one PCI-E 16x connection for a graphics card, two conventional PCI slots for various expansion cards, and one PCI-E 1x (which will eventually supersede PCI). A standard EATX motherboard will have one PCI-E 16x connection for a graphics card, and a varying number of PCI and PCI-E 1x slots. It can sometimes also have a PCI-E 4x slot. (This varies between brands and models.)
Some motherboards have two PCI-E 16x slots, to allow more than 2 monitors without special hardware, or use a special graphics technology called SLI (for Nvidia) and Crossfire (for ATI). These allow 2 graphics cards to be linked together, to allow better performance in intensive graphical computing tasks, such as gaming and video editing.
As of 2007, virtually all motherboards come with at least four USB ports on the rear, with at least two internal headers on the board for wiring additional front-panel ports that may be built into the computer's case. An Ethernet port, the standard interface for connecting the computer to a network or a broadband modem, is also included. A sound chip is always included on the motherboard, allowing sound output without any extra components; this lets computers handle multimedia far better than before. Some motherboards also have a graphics chip built in rather than requiring a separate card, although a separate card may still be used.

Temperature and reliability

Motherboards are generally air-cooled, with heat sinks often mounted on larger chips such as the Northbridge in modern designs. If the motherboard is not cooled properly, the computer can crash. Passive cooling, or a single fan mounted on the power supply, was sufficient for many desktop computer CPUs until the late 1990s; since then, most have required CPU fans mounted on their heat sinks, due to rising clock speeds and power consumption. Most motherboards have connectors for additional case fans as well. Newer motherboards have integrated temperature sensors to detect motherboard and CPU temperatures, and controllable fan connectors which the BIOS or operating system can use to regulate fan speed. Some computers (which typically have high-performance microprocessors, large amounts of RAM, and high-performance video cards) use a water-cooling system instead of many fans.
Some small form factor computers and home theater PCs designed for quiet and energy-efficient operation boast fan-less designs. This typically requires the use of a low-power CPU, as well as careful layout of the motherboard and other components to allow for heat sink placement.
A 2003 study[7] found that some spurious computer crashes and general reliability issues, ranging from screen image distortions to I/O read/write errors, can be attributed not to software or peripheral hardware but to aging capacitors on PC motherboards. Ultimately this was shown to be the result of a faulty electrolyte formulation.[8]
For more information on premature capacitor failure on PC motherboards, see capacitor plague.
Motherboards use electrolytic capacitors to filter the DC power distributed around the board. These capacitors age at a temperature-dependent rate, as their water-based electrolytes slowly evaporate. This can lead to loss of capacitance and subsequent motherboard malfunctions due to voltage instabilities. While most capacitors are rated for 2000 hours of operation at 105 °C,[9] their expected design life roughly doubles for every 10 °C below this. At 45 °C a lifetime of 15 years can be expected, which appears reasonable for a computer motherboard. However, many manufacturers have delivered substandard capacitors,[citation needed] which significantly reduce life expectancy. Inadequate case cooling and elevated temperatures easily exacerbate this problem. It is possible, but tedious and time-consuming, to find and replace failed capacitors on PC motherboards.
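The arithmetic behind that estimate is straightforward; the sketch below (a rule-of-thumb calculation in Python, not a manufacturer's lifetime model) reproduces the 105 °C / 45 °C example from the paragraph above.

    # Rule-of-thumb estimate: electrolytic capacitor life roughly doubles
    # for every 10 degrees C below the rated temperature.
    def estimated_life_hours(rated_hours, rated_temp_c, operating_temp_c):
        return rated_hours * 2 ** ((rated_temp_c - operating_temp_c) / 10.0)

    # A 2000-hour, 105 degC capacitor operated at 45 degC:
    hours = estimated_life_hours(2000, 105, 45)   # 2000 * 2**6 = 128,000 hours
    print(hours / (24 * 365))                     # about 14.6 years, i.e. roughly 15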

Form factor

microATX form factor motherboard
Motherboards are produced in a variety of sizes and shapes, called computer form factors, some of which are specific to individual computer manufacturers. However, the motherboards used in IBM-compatible systems are designed to fit various case sizes. As of 2007, most desktop computer motherboards use one of these standard form factors—even those found in Macintosh and Sun computers, which have not traditionally been built from commodity components. The current desktop PC form factor of choice is ATX. A case, its motherboard, and its power supply unit must all have compatible form factors, though some smaller form factor motherboards of the same family will fit larger cases. For example, an ATX case will usually accommodate a microATX motherboard.
Laptop computers generally use highly integrated, miniaturized and customized motherboards. This is one of the reasons that laptop computers are difficult to upgrade and expensive to repair. Often the failure of one laptop component requires the replacement of the entire motherboard, which is usually more expensive than a desktop motherboard due to the large number of integrated components.

computer memory (RAM)

Random-access memory (RAM) is a form of computer data storage. Today, it takes the form of integrated circuits that allow stored data to be accessed in any order (i.e., at random). "Random" refers to the idea that any piece of data can be returned in a constant time, regardless of its physical location and whether it is related to the previous piece of data.[1]
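The contrast with sequential media can be sketched in a few lines of Python (a toy illustration only; real memory and tape timings are far more complicated than this):

    # Toy contrast between random access and sequential access.
    data = list(range(1_000_000))

    def random_access(index):
        # One indexed lookup; the cost does not depend on where the item is.
        return data[index]

    def sequential_access(index):
        # A tape-like scan; the cost grows with how far along the medium the item sits.
        for position, value in enumerate(data):
            if position == index:
                return value

    print(random_access(999_999))      # effectively constant time
    print(sequential_access(999_999))  # has to pass over everything before it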
The word "RAM" is often associated with volatile types of memory (such as DRAM memory modules), where the information is lost after the power is switched off. Many other types of memory are RAM as well, including most types of ROM and a type of flash memory called NOR-Flash.


Types of RAM

Top L-R, DDR2 with heat-spreader, DDR2 without heat-spreader, Laptop DDR2, DDR, Laptop DDR
1 Megabit chip - one of the last models developed by VEB Carl Zeiss Jena in 1989
Modern types of writable RAM generally store a bit of data in either the state of a flip-flop, as in SRAM (static RAM), or as a charge in a capacitor (or transistor gate), as in DRAM (dynamic RAM), EPROM, EEPROM and Flash. Some types have circuitry to detect and/or correct random faults, called memory errors, in the stored data, using parity bits or error correction codes. RAM of the read-only type, ROM, instead uses a metal mask to permanently enable/disable selected transistors, instead of storing a charge in them. Of special consideration are SIMM and DIMM memory modules.
SRAM and DRAM are volatile. Other forms of computer storage, such as disks and magnetic tapes, have been used as persistent storage in traditional computers. Many newer products instead rely on flash memory to maintain data when not in use, such as PDAs or small music players. Certain personal computers, such as many rugged computers and netbooks, have also replaced magnetic disks with flash drives. With flash memory, only the NOR type is capable of true random access, allowing direct code execution, and is therefore often used instead of ROM; the lower cost NAND type is commonly used for bulk storage in memory cards and solid-state drives.
A memory chip is an integrated circuit (IC) made of millions of transistors and capacitors. In the most common form of computer memory, dynamic random access memory (DRAM), a transistor and a capacitor are paired to create a memory cell, which represents a single bit of data. The capacitor holds the bit of information — a 0 or a 1. The transistor acts as a switch that lets the control circuitry on the memory chip read the capacitor or change its state.
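A toy model of that one-transistor, one-capacitor cell may help (purely illustrative: real DRAM stores analog charge levels and is refreshed by the memory controller, not by software):

    # Toy DRAM cell: a capacitor stores the bit, the transistor is the switch
    # through which the control circuitry reads or rewrites it.
    class DramCell:
        def __init__(self):
            self.charge = 0.0                     # 1.0 = fully charged, 0.0 = empty

        def write(self, bit):
            self.charge = 1.0 if bit else 0.0     # set or drain the capacitor

        def leak(self, amount=0.2):
            self.charge = max(0.0, self.charge - amount)   # charge slowly drains away

        def read(self):
            bit = 1 if self.charge > 0.5 else 0   # the sense amplifier decides 0 or 1
            self.write(bit)                       # reading disturbs the charge, so rewrite
            return bit

    cell = DramCell()
    cell.write(1)
    cell.leak(); cell.leak()   # without periodic refresh the stored charge decays
    print(cell.read())         # still 1 here; further leakage would eventually flip it to 0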

Memory hierarchy

Many computer systems have a memory hierarchy consisting of CPU registers, on-die SRAM caches, external caches, DRAM, paging systems, and virtual memory or swap space on a hard drive. This entire pool of memory may be referred to as "RAM" by many developers, even though the various subsystems can have very different access times, violating the original concept behind the random access term in RAM. Even within a hierarchy level such as DRAM, the specific row, column, bank, rank, channel, or interleave organization of the components makes the access time variable, although not to the extent that the access time of rotating storage media or tape is variable. The overall goal of using a memory hierarchy is to obtain the highest possible average access performance while minimizing the total cost of the entire memory system (generally, the memory hierarchy follows the access time, with the fast CPU registers at the top and the slow hard drive at the bottom).
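The performance side of that trade-off is usually summarised as an average access time. The sketch below uses assumed, illustrative latencies and hit rates (not measurements of any real system) to show why a hierarchy behaves almost as fast as its top level:

    # Average memory access time for a simple three-level hierarchy.
    # Latencies (nanoseconds) and hit rates are illustrative assumptions only.
    levels = [
        # (name,      latency_ns, hit rate among accesses that reach this level)
        ("L1 cache",  1,          0.90),
        ("L2 cache",  10,         0.95),
        ("DRAM",      100,        1.00),   # backstop: everything left is served here
    ]

    def average_access_time(levels):
        total = 0.0
        fraction_reaching_level = 1.0
        for name, latency_ns, hit_rate in levels:
            total += fraction_reaching_level * latency_ns   # accesses probing this level pay its latency
            fraction_reaching_level *= (1.0 - hit_rate)     # only the misses go further down
        return total

    print(average_access_time(levels))   # 1 + 0.1*10 + 0.005*100 = 2.5 ns on average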
In many modern personal computers, the RAM comes in an easily upgraded form of modules called memory modules or DRAM modules about the size of a few sticks of chewing gum. These can quickly be replaced should they become damaged or when changing needs demand more storage capacity. As suggested above, smaller amounts of RAM (mostly SRAM) are also integrated in the CPU and other ICs on the motherboard, as well as in hard-drives, CD-ROMs, and several other parts of the computer system.

Hard disk

Diagram of a computer hard disk drive
HDDs record data by magnetizing ferromagnetic material directionally. Sequential changes in the direction of magnetization represent patterns of binary data bits. The data are read from the disk by detecting the transitions in magnetization and decoding the originally written data. Different encoding schemes, such as Modified Frequency Modulation, group code recording, run-length limited encoding, and others are used.
A typical HDD design consists of a spindle that holds flat circular disks called platters, onto which the data are recorded. The platters are made from a non-magnetic material, usually aluminum alloy or glass, and are coated with a shallow layer of magnetic material, typically 10–20 nm in depth—for reference, standard copy paper is 0.07–0.18 millimetre (70,000–180,000 nm)[5]—with an outer layer of carbon for protection.
A cross section of the magnetic surface in action. In this case the binary data are encoded using frequency modulation.
Perpendicular recording
The platters are spun at speeds varying from 3,000 RPM in energy-efficient portable devices, to 15,000 RPM for high-performance servers. Information is written to, and read from, a platter as it rotates past devices called read-and-write heads that operate very close (tens of nanometers in new drives) above the magnetic surface. The read-and-write head is used to detect and modify the magnetization of the material immediately under it. In modern drives there is one head for each magnetic platter surface on the spindle, mounted on a common arm. An actuator arm (or access arm) moves the heads on an arc (roughly radially) across the platters as they spin, allowing each head to access almost the entire surface of its platter. The arm is moved using a voice coil actuator or, in some older designs, a stepper motor.
The magnetic surface of each platter is conceptually divided into many small sub-micrometer-sized magnetic regions referred to as magnetic domains. In older disk designs the regions were oriented horizontally and parallel to the disk surface, but beginning about 2005, the orientation was changed to perpendicular to allow for closer magnetic domain spacing. Due to the polycrystalline nature of the magnetic material, each of these magnetic regions is composed of a few hundred magnetic grains. Magnetic grains are typically 10 nm in size and each forms a single magnetic domain. Each magnetic region in total forms a magnetic dipole which generates a magnetic field.
For reliable storage of data, the recording material needs to resist self-demagnetization, which occurs when the magnetic domains repel each other. Magnetic domains written too densely together to a weakly magnetizable material will degrade over time due to physical rotation of one or more domains to cancel out these forces. The domains rotate sideways to a halfway position that weakens the readability of the domain and relieves the magnetic stresses. Older hard disks used iron(III) oxide as the magnetic material, but current disks use a cobalt-based alloy.[6]
A write head magnetizes a region by generating a strong local magnetic field. Early HDDs used an electromagnet both to magnetize the region and to then read its magnetic field by using electromagnetic induction. Later versions of inductive heads included metal-in-gap (MIG) heads and thin film heads. As data density increased, read heads using magnetoresistance (MR) came into use; the electrical resistance of the head changed according to the strength of the magnetism from the platter. Later development made use of spintronics; in these heads, the magnetoresistive effect was much greater than in earlier types, and was dubbed "giant" magnetoresistance (GMR). In today's heads, the read and write elements are separate, but in close proximity, on the head portion of an actuator arm. The read element is typically magneto-resistive while the write element is typically thin-film inductive.[7]
The heads are kept from contacting the platter surface by the air that is extremely close to the platter; that air moves at or near the platter speed. The record and playback head are mounted on a block called a slider, and the surface next to the platter is shaped to keep it just barely out of contact. This forms a type of air bearing.
In modern drives, the small size of the magnetic regions creates the danger that their magnetic state might be lost because of thermal effects. To counter this, the platters are coated with two parallel magnetic layers, separated by a 3-atom layer of the non-magnetic element ruthenium, and the two layers are magnetized in opposite orientation, thus reinforcing each other.[8] Another technology used to overcome thermal effects to allow greater recording densities is perpendicular recording, first shipped in 2005,[9] and as of 2007 the technology was used in many HDDs.[10][11][12]

Components

A hard disk drive with the platters and motor hub removed showing the copper colored stator coils surrounding a bearing at the center of the spindle motor. The orange stripe along the side of the arm is a thin printed-circuit cable. The spindle bearing is in the center.
A typical hard drive has two electric motors, one to spin the disks and one to position the read/write head assembly. The disk motor has an external rotor attached to the platters; the stator windings are fixed in place. The actuator has a read-write head under the tip of its very end (near center); a thin printed-circuit cable connects the read-write head to the hub of the actuator. A flexible, somewhat U-shaped, ribbon cable, seen edge-on below and to the left of the actuator arm in the first image and more clearly in the second, continues the connection from the head to the controller board on the opposite side.
The head support arm is very light, but also stiff; in modern drives, acceleration at the head reaches 550 Gs.
The silver-colored structure at the upper left of the first image is the top plate of the permanent-magnet and moving coil motor that swings the heads to the desired position (it is shown removed in the second image). The plate supports a squat neodymium-iron-boron (NIB) high-flux magnet. Beneath this plate is the moving coil, often referred to as the voice coil by analogy to the coil in loudspeakers, which is attached to the actuator hub, and beneath that is a second NIB magnet, mounted on the bottom plate of the motor (some drives only have one magnet).
The voice coil itself is shaped rather like an arrowhead, and made of doubly coated copper magnet wire. The inner layer is insulation, and the outer is thermoplastic, which bonds the coil together after it is wound on a form, making it self-supporting. The portions of the coil along the two sides of the arrowhead (which point to the actuator bearing center) interact with the magnetic field, developing a tangential force that rotates the actuator. Current flowing radially outward along one side of the arrowhead and radially inward on the other produces the tangential force. If the magnetic field were uniform, each side would generate opposing forces that would cancel each other out. Therefore the surface of the magnet is half N pole, half S pole, with the radial dividing line in the middle, causing the two sides of the coil to see opposite magnetic fields and produce forces that add instead of canceling. Currents along the top and bottom of the coil produce radial forces that do not rotate the head.

Error handling

Modern drives also make extensive use of Error Correcting Codes (ECCs), particularly Reed–Solomon error correction. These techniques store extra bits for each block of data that are determined by mathematical formulas. The extra bits allow many errors to be fixed. While these extra bits take up space on the hard drive, they allow higher recording densities to be employed, resulting in much larger storage capacity for user data.[13] In 2009, in the newest drives, low-density parity-check codes (LDPC) are supplanting Reed-Solomon. LDPC codes enable performance close to the Shannon Limit and thus allow for the highest storage density available.[14]
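The flavour of such codes can be shown with a scheme far simpler than Reed–Solomon or LDPC. The sketch below uses a Hamming(7,4) code, which stores three parity bits for every four data bits and can correct any single flipped bit; drive ECC operates on much larger blocks, but the principle of adding mathematically determined redundancy is the same.

    # Hamming(7,4): 4 data bits -> 7-bit codeword; any single bit error is correctable.
    # Codeword positions 1..7, with parity bits at positions 1, 2 and 4.
    def encode(d1, d2, d3, d4):
        p1 = d1 ^ d2 ^ d4            # checks positions 3, 5, 7
        p2 = d1 ^ d3 ^ d4            # checks positions 3, 6, 7
        p3 = d2 ^ d3 ^ d4            # checks positions 5, 6, 7
        return [p1, p2, d1, p3, d2, d3, d4]

    def decode(c):
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # recompute each parity check
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, otherwise the position of the bad bit
        if syndrome:
            c[syndrome - 1] ^= 1         # flip the offending bit back
        return [c[2], c[4], c[5], c[6]]  # the recovered data bits

    word = encode(1, 0, 1, 1)
    word[5] ^= 1                 # simulate a single bit flipped on the medium
    print(decode(word))          # -> [1, 0, 1, 1], the original data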
Typical hard drives attempt to "remap" the data in a physical sector that is going bad to a spare physical sector—hopefully while the errors in that bad sector are still few enough that the ECC can recover the data without loss. The S.M.A.R.T. system counts the total number of errors in the entire hard drive fixed by ECC, and the total number of remappings, in an attempt to predict hard drive failure.

Mouse

The trackball was invented by Tom Cranston, Fred Longstaff and Kenyon Taylor working on the Royal Canadian Navy's DATAR project in 1952. It used a standard Canadian five-pin bowling ball. It was not patented, as it was a secret military project.[2]
Independently, Douglas Engelbart at the Stanford Research Institute invented the first mouse prototype in 1963,[3] with the assistance of his colleague Bill English. They christened the device the mouse, as early models had a cord attached to the rear part of the device, suggesting a tail and giving it a general resemblance to the common mouse.[4] Engelbart never received any royalties for it, as his patent ran out before it became widely used in personal computers.[5]
The invention of the mouse was just a small part of Engelbart's much larger project, aimed at augmenting human intellect.[6]
The first computer mouse, held by inventor Douglas Engelbart, showing the wheels that make contact with the working surface
Several other experimental pointing-devices developed for Engelbart's oN-Line System (NLS) exploited different body movements – for example, head-mounted devices attached to the chin or nose – but ultimately the mouse won out because of its simplicity and convenience. The first mouse, a bulky device (pictured) used two gear-wheels perpendicular to each other: the rotation of each wheel translated into motion along one axis. Engelbart received patent US3,541,541 on November 17, 1970 for an "X-Y Position Indicator for a Display System".[7] At the time, Engelbart envisaged that users would hold the mouse continuously in one hand and type on a five-key chord keyset with the other.[8] The concept was preceded in the 19th century by the telautograph, which also anticipated the fax machine.
A Smaky mouse, as invented at the EPFL by Jean-Daniel Nicoud and André Guignard
A few weeks before Engelbart presented his demo in 1968, a mouse had already been developed and published by the German company Telefunken. Unlike Engelbart's mouse, the Telefunken model used a rolling ball, as seen in most later models up to the present day. From 1970 it was shipped as a component of, and sold together with, Telefunken computers. Some models from 1972 are still well preserved.[9]
The second marketed integrated mouse – shipped as a part of a computer and intended for personal computer navigation – came with the Xerox 8010 Star Information System in 1981. However, the mouse remained relatively obscure until the appearance of the Apple Macintosh, which included an updated version of the original Lisa mouse. In 1984 PC columnist John C. Dvorak dismissively commented on the newly released computer with a mouse: "There is no evidence that people want to use these things".[10][11]

Variants

Mechanical mice

Operating an opto-mechanical mouse.
  1. Moving the mouse turns the ball.
  2. X and Y rollers grip the ball and transfer movement.
  3. Optical encoding disks include light holes.
  4. Infrared LEDs shine through the disks.
  5. Sensors gather light pulses to convert to X and Y vectors.
Bill English, builder of Engelbart's original mouse,[12] invented the ball mouse in 1972 while working for Xerox PARC.[13]
The ball-mouse replaced the external wheels with a single ball that could rotate in any direction. It came as part of the hardware package of the Xerox Alto computer. Perpendicular chopper wheels housed inside the mouse's body chopped beams of light on the way to light sensors, thus detecting in their turn the motion of the ball. This variant of the mouse resembled an inverted trackball and became the predominant form used with personal computers throughout the 1980s and 1990s. The Xerox PARC group also settled on the modern technique of using both hands to type on a full-size keyboard and grabbing the mouse when required.
Mechanical mouse, shown with the top cover removed
The ball mouse has two freely rotating rollers, located 90 degrees apart. One roller detects the forward–backward motion of the mouse and the other the left–right motion. Opposite the two rollers is a third one (white, in the photo, at 45 degrees) that is spring-loaded to push the ball against the other two rollers. Each roller is on the same shaft as an encoder wheel that has slotted edges; the slots interrupt infrared light beams to generate electrical pulses that represent wheel movement.
Each wheel's disc, however, has a pair of light beams, located so that a given beam becomes interrupted, or again starts to pass light freely, when the other beam of the pair is about halfway between changes. Simple logic circuits interpret the relative timing to indicate which direction the wheel is rotating. (This scheme is sometimes called "quadrature encoding" or some similar term by technical people.) The mouse sends these signals to the computer system via a data-formatting IC and the mouse cable. The driver software in the system converts the signals into motion of the mouse cursor along X and Y axes on the screen.
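A short sketch of that quadrature idea (illustrative only; in a real mouse this is done by simple hardware logic, not software): because the two beams are offset, the order in which they change reveals the direction of rotation.

    # Quadrature decoding: the (A, B) beam pair steps through 00 -> 01 -> 11 -> 10
    # in one direction of rotation, and through the reverse order in the other.
    _ORDER = [(0, 0), (0, 1), (1, 1), (1, 0)]

    def quadrature_step(previous, current):
        """Return +1, -1 or 0 for one transition of the (A, B) pair."""
        if previous == current:
            return 0
        diff = (_ORDER.index(current) - _ORDER.index(previous)) % 4
        if diff == 1:
            return +1    # signals advanced one step: forward rotation
        if diff == 3:
            return -1    # signals moved one step the other way: reverse rotation
        return 0         # a two-step jump means a missed or invalid transition

    # Example: the encoder wheel turns forward three steps, then back two.
    samples = [(0, 0), (0, 1), (1, 1), (1, 0), (1, 1), (0, 1)]
    position = 0
    for prev, cur in zip(samples, samples[1:]):
        position += quadrature_step(prev, cur)
    print(position)   # 3 forward - 2 back = 1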
The ball is mostly steel, with a precision spherical rubber surface. The weight of the ball, given an appropriate working surface under the mouse, provides a reliable grip so the mouse's movement is transmitted accurately.
Ball mice and wheel mice were manufactured for Xerox by Jack Hawley, doing business as The Mouse House in Berkeley, California, starting in 1975.[14][15]
Based on another invention by Jack Hawley, proprietor of the Mouse House, Honeywell produced another type of mechanical mouse.[16][17] Instead of a ball, it had two wheels rotating at off axes. Keytronic later produced a similar product.[18]
Modern computer mice took form at the École polytechnique fédérale de Lausanne (EPFL) under the inspiration of Professor Jean-Daniel Nicoud and at the hands of engineer and watchmaker André Guignard.[19] This new design incorporated a single hard rubber mouseball and three buttons, and remained a common design until the mainstream adoption of the scroll-wheel mouse during the 1990s.[20] In 1985, René Sommer added a microprocessor to Nicoud's and Guignard's design.[21] Through this innovation, Sommer is credited with inventing a significant component of the mouse, which made it more "intelligent;"[21] though optical mice from Mouse Systems had incorporated microprocessors by 1984.[22]
Another type of mechanical mouse, the "analog mouse" (now generally regarded as obsolete), uses potentiometers rather than encoder wheels, and is typically designed to be plug-compatible with an analog joystick. The "Color Mouse", originally marketed by Radio Shack for their Color Computer (but also usable on MS-DOS machines equipped with analog joystick ports, provided the software accepted joystick input) was the best-known example.

Optical mice

A wireless optical mouse on a mouse pad
An optical mouse uses a light-emitting diode and photodiodes to detect movement relative to the underlying surface, rather than internal moving parts as does a mechanical mouse.

Inertial and gyroscopic mice

Often called "air mice" since they do not require a surface to operate, inertial mice use a tuning fork or other accelerometer (US Patent 4787051) to detect rotary movement for every axis supported. The most common models (manufactured by Logitech and Gyration) work using 2 degrees of rotational freedom and are insensitive to spatial translation. The user requires only small wrist rotations to move the cursor, reducing user fatigue or "gorilla arm". Usually cordless, they often have a switch to deactivate the movement circuitry between use, allowing the user freedom of movement without affecting the cursor position. A patent for an inertial mouse claims that such mice consume less power than optically based mice, and offer increased sensitivity, reduced weight and increased ease-of-use.[23] In combination with a wireless keyboard an inertial mouse can offer alternative ergonomic arrangements which do not require a flat work surface, potentially alleviating some types of repetitive motion injuries related to workstation posture.

3D mice

Also known as bats,[24] flying mice, or wands,[25] these devices generally function through ultrasound and provide at least three degrees of freedom. Probably the best known example would be 3DConnexion/Logitech's SpaceMouse from the early 1990s.
In the late 1990s Kantek introduced the 3D RingMouse. This wireless mouse was worn on a ring around a finger, which enabled the thumb to access three buttons. The mouse was tracked in three dimensions by a base station.[26] Despite a certain appeal, it was finally discontinued because it did not provide sufficient resolution.
A recent consumer 3D pointing device is the Wii Remote. While primarily a motion-sensing device (that is, it can determine its orientation and direction of movement), the Wii Remote can also detect its spatial position by comparing the distance and position of the lights from the IR emitter using its integrated IR camera (since the Nunchuk accessory lacks a camera, it can only report its current heading and orientation). The obvious drawback to this approach is that it can only produce spatial coordinates while its camera can see the sensor bar.
In February, 2008, at the Game Developers' Conference (GDC), a company called Motion4U introduced a 3D mouse add-on called "OptiBurst" for Autodesk's Maya application. The mouse allows users to work in true 3D with 6 degrees of freedom.[citation needed] The primary advantage of this system is speed of development with organic (natural) movement.
A mouse-related controller called the SpaceBall™ [27] has a ball placed above the work surface that can easily be gripped. With spring-loaded centering, it sends both translational as well as angular displacements on all six axes, in both directions for each.

Tactile mice

In 2000, Logitech introduced the "tactile mouse", which contained a small actuator that made the mouse vibrate. Such a mouse can augment user-interfaces with haptic feedback, such as giving feedback when crossing a window boundary. To surf by touch requires the user to be able to feel depth or hardness; this ability was realized with the first electrorheological tactile mice[28] but never marketed.

Connectivity and communication protocols

An MS wireless Arc mouse
To transmit their input, typical cabled mice use a thin electrical cord terminating in a standard connector, such as RS-232C, PS/2, ADB or USB. Cordless mice instead transmit data via infrared radiation (see IrDA) or radio (including Bluetooth), although many such cordless interfaces are themselves connected through the aforementioned wired serial buses.
While the electrical interface and the format of the data transmitted by commonly available mice is currently standardized on USB, in the past it varied between different manufacturers. A bus mouse used a dedicated interface card for connection to an IBM PC or compatible computer.
Mouse use in DOS applications became more common after the introduction of the Microsoft mouse, largely because Microsoft provided an open standard for communication between applications and mouse driver software. Thus, any application written to use the Microsoft standard could use a mouse with a Microsoft compatible driver (even if the mouse hardware itself was incompatible with Microsoft's). An interesting footnote is that the Microsoft driver standard communicates mouse movements in standard units called "mickeys".

Serial interface and protocol

Standard PC mice once used the RS-232C serial port via a D-subminiature connector, which provided power to run the mouse's circuits as well as data on mouse movements. The Mouse Systems Corporation version used a five-byte protocol and supported three buttons. The Microsoft version used an incompatible three-byte protocol and only allowed for two buttons. Due to the incompatibility, some manufacturers sold serial mice with a mode switch: "PC" for MSC mode, "MS" for Microsoft mode.[29]
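For illustration, here is a sketch of how one such three-byte packet can be decoded. The bit layout shown follows the commonly documented description of the Microsoft serial protocol (each byte carries 7 data bits, with bit 6 of the first byte marking the start of a packet); treat the exact bit positions as an assumption rather than a specification.

    # Decode one three-byte Microsoft-protocol mouse packet (7 data bits per byte).
    # Byte 1: 1 LB RB Y7 Y6 X7 X6     (bit 6 set marks the first byte of a packet)
    # Byte 2: 0 X5 X4 X3 X2 X1 X0
    # Byte 3: 0 Y5 Y4 Y3 Y2 Y1 Y0
    # X and Y are signed 8-bit movement deltas since the previous packet.
    def to_signed(value):
        return value - 256 if value >= 128 else value

    def decode_packet(b1, b2, b3):
        if not (b1 & 0x40):
            raise ValueError("first byte of a packet must have bit 6 set")
        left = bool(b1 & 0x20)
        right = bool(b1 & 0x10)
        dx = to_signed(((b1 & 0x03) << 6) | (b2 & 0x3F))
        dy = to_signed(((b1 & 0x0C) << 4) | (b3 & 0x3F))
        return left, right, dx, dy

    print(decode_packet(0x6C, 0x05, 0x3E))   # (True, False, 5, -2): left button down, small move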

keyboard history

A QWERTY keyboard on a laptop computer
QWERTY (pronounced /ˈkwɜrti/) is the most widely used modern-day keyboard layout. The QWERTY design is based on a layout created by Christopher Latham Sholes in 1873 for the Sholes and Glidden typewriter and sold to Remington in the same year, when it first appeared in typewriters. It became popular with the success of the Remington No. 2 of 1878, and remains in use on electronic keyboards due to the network effect of a standard layout and a belief that alternatives fail to provide very significant advantages.[1] The use and adoption of the QWERTY keyboard is often viewed as one of the most important case studies in open standards because of the widespread, collective adoption and use of the product, particularly in the United States.[2]

Differences from modern layout

Latham Sholes' 1878 QWERTY keyboard layout
The QWERTY layout depicted in Sholes' 1878 patent includes a few differences from the modern layout, most notably in the absence of the numerals 0 and 1, with each of the remaining numerals shifted one position to the left of their modern counterparts. The letter M is located at the end of the third row to the right of the letter L rather than on the fourth row to the right of the N, the letters C and X are reversed, and most punctuation marks are in different positions or are missing entirely.[8] 0 and 1 were omitted to simplify the design and reduce the manufacturing and maintenance costs; they were chosen specifically because they were "redundant" and could be recreated using other keys. Typists who learned on these machines learned the habit of using the uppercase letter I (or lowercase letter L) for the digit one, and the uppercase O for the zero.[9] The exclamation point, which shares a key with the numeral 1 on modern keyboards, could be reproduced by using a three-stroke combination of an apostrophe, a backspace, and a period. The 0 key was added and standardized in its modern position early in the history of the typewriter, but the 1 and exclamation point were left off some typewriter keyboards into the 1970s.[10]

Contemporary alternatives

There was no particular technological requirement for the QWERTY layout,[clarification needed] since at the time there were ways to make a typewriter without the "up-stroke" typebar mechanism that had required it to be devised. Not only were there rival machines with "down-stroke" and "frontstroke" positions that gave a visible printing point, the problem of typebar clashes could be circumvented completely: examples include Thomas Edison's 1872 electric print-wheel device which later became the basis for Teletype machines; Lucien Stephen Crandall's typewriter (the second to come onto the American market) whose type was arranged on a cylindrical sleeve; the Hammond typewriter of 1887 which used a semi-circular "type-shuttle" of hardened rubber (later light metal); and the Blickensderfer typewriter of 1893 which used a type wheel. The early Blickensderfer's "Ideal" keyboard was also non-QWERTY, instead having the sequence "DHIATENSOR" in the home row, these 10 letters being capable of composing 70% of the words in the English language.[6]

Properties

Alternating hands while typing is a desirable trait in a keyboard design, since while one hand is typing a letter, the other hand can get in position to type the next letter. Thus, a typist may fall into a steady rhythm and type quickly. However, when a string of letters is done with the same hand, the chances of stuttering are increased and a rhythm can be broken, thus decreasing speed and increasing errors and fatigue. In the QWERTY layout many more words can be spelled using only the left hand than the right hand. In fact, thousands of English words can be spelled using only the left hand, while only a couple of hundred words can be typed using only the right hand. In addition, most typing strokes are done with the left hand in the QWERTY layout. This is helpful for left-handed people but to the disadvantage of right-handed people.[11]
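The left-hand/right-hand claim is easy to test against any word list. A small sketch follows (the sample words below are illustrative; a real analysis would run over a full dictionary):

    # Which hand(s) does a word need on a QWERTY keyboard? Letters only.
    LEFT_HAND = set("qwertasdfgzxcvb")
    RIGHT_HAND = set("yuiophjklnm")

    def hands_needed(word):
        letters = set(word.lower())
        if letters <= LEFT_HAND:
            return "left hand only"
        if letters <= RIGHT_HAND:
            return "right hand only"
        return "both hands"

    for word in ["stewardesses", "exaggerated", "monopoly", "lollipop", "typewriter"]:
        print(word, "->", hands_needed(word))
    # stewardesses, exaggerated -> left hand only; monopoly, lollipop -> right hand only;
    # typewriter -> both hands (it needs y, p and i from the right hand).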

Computer keyboards

The standard QWERTY keyboard layout used in the US. Some countries, such as the UK and Canada, use a slightly different QWERTY (the @ and " are switched in the UK); see keyboard layout.
The first computer terminals such as the Teletype were typewriters that could produce and be controlled by various computer codes. These used the QWERTY layouts, and added keys such as escape (ESC) which had special meanings to computers. Later keyboards added function keys and arrow keys. Since the standardization of PC-compatible computers and Windows after the 1980s, most full-sized computer keyboards have followed this standard (see drawing at right). This layout has a separate numeric keypad for data entry at the right, 12 function keys across the top, and a cursor section to the right and center with keys for Insert, Delete, Home, End, Page Up, and Page Down with cursor arrows in an inverted-T shape.

Diacritical marks and international variants

Different computer operating systems provide support for input of different languages such as Chinese, Hebrew or Arabic. QWERTY is designed for English, a language without diacritical marks. QWERTY keyboards present difficulties when accented characters must be typed. Until recently, no standard was defined for a QWERTY keyboard layout allowing the typing of accented characters, apart from the US-International layout.
Depending on the operating system and sometimes the application program being used, there are many ways to generate Latin characters with accents.

UK-Extended Layout

Microsoft Windows XP SP2 and above provide the UK-Extended layout, which behaves exactly the same as the standard UK layout for all the characters it can generate, but can additionally generate a number of diacritical marks, useful when working with text in other languages (including Welsh, a language of the UK). Not all combinations work on all keyboards.
  • acute accents (e.g. á) on a,e,i,o,u,w,y,A,E,I,O,U,W,Y are generated by pressing the AltGr key together with the letter, or AltGr and apostrophe, followed by the letter (see note below);
  • grave accents (e.g. è) on a,e,i,o,u,w,y,A,E,I,O,U,W,Y are generated by pressing the backquote (`) [which is now a dead key], then the letter;
  • circumflex (e.g. â) on a,e,i,o,u,w,y,A,E,I,O,U,W,Y is generated by AltGr and 6, followed by the letter;
  • trema (e.g. ö) on a,e,i,o,u,w,y,A,E,I,O,U,W,Y is generated by AltGr and 2, then the letter;
  • tilde (e.g. ã) on a,n,o,A,N,O is generated by AltGr and #, then the letter;
  • cedilla (e.g. ç) under c,C is generated by AltGr and the letter.
These combinations are designed to be easy to remember, as the circumflex accent (e.g. â) is similar to a caret (^), printed above the 6 key; the diaeresis (e.g. ö) is similar to the double-quote (") above 2 on the UK keyboard; the tilde (~) is printed on the same key as the #.
Like US-International, UK-Extended does not cater for many languages written with Latin characters, including Romanian and Turkish, or any using different character sets such as Greek and Russian.
Notes:
  • The AltGr and letter method used for acutes and cedillas does not work for applications which assign shortcut menu functions to these key combinations. For acute accents the AltGr and apostrophe method should be used.

Other keys and characters

International variants

Minor changes to the arrangement are made for other languages.

Alternatives to QWERTY

Turkish F keyboard.
The modern Dvorak Simplified Keyboard layout, the leading alternative keyboard layout to QWERTY.
Several alternatives to QWERTY have been developed over the years, claimed by their designers and users to be more efficient, intuitive and ergonomic. Nevertheless, none has seen widespread adoption, due partly to the sheer dominance of available keyboards and training.[12] Although studies have shown the superiority in typing speed afforded by alternative keyboard layouts,[13] economists Stan Liebowitz and Stephen E. Margolis have claimed that these studies are flawed and that more rigorous studies are inconclusive as to whether they actually offer any real benefits.[1] The most widely used such alternative is the Dvorak Simplified Keyboard; another increasingly popular alternative is Colemak, which is based partly on QWERTY and is therefore easier for an existing QWERTY typist to learn while offering several optimisations.[14] Most modern computer operating systems support these and other alternative mappings with appropriate special mode settings, but few keyboards are manufactured with keys labeled according to these layouts.