
Chapter 4: How to Overclock

Motherboard Configuration

Overclocking involves manipulating the processor’s multiplier and the motherboard’s front-side bus speed, in small increments, until a maximum stable operating frequency is reached. The idea is simple, but variation in both the electrical and physical characteristics of x86 computing systems complicates this process.

Processor multipliers, bus dividers, voltages, thermal loads, cooling techniques, and many other issues can affect your ability to push any given system to its maximum potential.

On most systems, processor multiplier values, motherboard bus speeds, and voltage levels can be adjusted, either through hardware-level jumpers and dipswitches or firmware BIOS settings. The brand and model of the motherboard determine how easy and effective the process will be. Most boards allow you to configure at least a portion of these settings, though many low-end and original equipment manufacturer (OEM) designs opt for autodetection routines that prevent manual manipulation.

Figure 4-1: Jumper configuration

Jumpers and dipswitches are the predominant methods for adjusting motherboard values in many computing platforms. Jumpers are small electrically conductive devices that push into place over a series of pins to establish an electrical connection (essentially, a removable on/off switch). Jumper points are usually arranged in block patterns, each jumper connecting two pins within the series. Connecting a series of pins in a specific sequence within the block creates the signaling data required to set parameters for proper motherboard operation.

Figure 4-2: Dipswitch configuration

Dipswitches are tiny switching devices, usually found in groups within a single interface block. Electrically, dipswitches work the same way their jumper cousins do; the design was introduced to simplify motherboard configuration. Dipswitches come in a variety of sizes, and the smallest types require particular care because they are easily damaged, especially after repeated position changes or the application of excessive force.

Many of the latest motherboard architectures allow for advanced hardware configuration through the system's CMOS BIOS Setup. Methods of entering the BIOS interface vary according to brand, but the basic procedure is generic: most systems prompt for a specific keystroke to enter the BIOS Setup menu. The most common are DEL and F2, but others include DEL-ESC, CTRL-ESC, F10, F12, CTRL-ALT, CTRL-ALT-ENTER, CTRL-ALT-F1, CTRL-ALT-S, and simply ESC.

If your system boots with a custom graphics screen, you can often press the ESC key to bypass it and view the standard interface. Custom boot screens are common in OEM-built systems.

No two motherboards are alike, so it is nearly impossible to determine how to alter hardware settings without researching the documentation provided by the motherboard manufacturer or system integrator. Some companies even choose to implement a combination of hardware and BIOS-level configuration options. They may use both jumpers or dipswitches and a BIOS Setup menu in order to appeal to both the OEM and retail markets.

Preferred Motherboards

Retail-level manufacturers usually want to maximize system configuration options, so their motherboards are likely to be easier to tweak. In contrast, prebuilt systems from larger OEMs and system integrators often lack advanced user-definable options.

Prebuilt systems are engineered for maximum stability across the widest range of users, so the incentive to allow user configuration of hardware is limited.

Taiwan-based Abit Computer Corporation is perhaps the most popular of these retail-level companies. Its motherboard designs support many customizable options.

Companies like Asus, Epox, Gigabyte, and Transcend also offer great designs for the enthusiast market. Nearly all motherboards allow some overclocking, either through hardware or software. Feature sets vary widely, however, even among similar motherboard models from the same manufacturer.

Motherboards may contain only some of the features that would facilitate overclocking. Optimal support would include the ability to manipulate the processor’s multiplier, configure processor-to-chipset bus speeds, and set processor core and motherboard input/output voltages. A feature called active thermal monitoring, which uses onboard sensors to maintain optimum temperature at extended operating speeds, also promotes stability and improves overclocking capability.

Overclocking via Processor Multiplier

Manipulating the processor multiplier is the optimal overclocking method, since it neither interrupts nor changes motherboard-level bus speeds. The multiplier value you select in the BIOS Setup menu (see Figure 4-3), or via dipswitches or jumpers on the motherboard, determines the processor's operating frequency: the processor multiplies the motherboard's front-side bus frequency by the multiplier. Raising the multiplier beyond its default setting therefore raises the processor's operating frequency beyond its default as well.
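As a rough sketch of the arithmetic (the bus speed and multiplier values below are hypothetical examples, not settings for any particular processor):

    def core_frequency(fsb_mhz, multiplier):
        # The core frequency is the front-side bus speed times the multiplier.
        return fsb_mhz * multiplier

    # A hypothetical 100-MHz front-side bus at a 6.0x multiplier yields 600 MHz.
    print(core_frequency(100, 6.0))   # 600.0
    # Raising the multiplier to 6.5x overclocks the core to 650 MHz
    # without touching any motherboard bus speed.
    print(core_frequency(100, 6.5))   # 650.0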

Figure 4-3: Award BIOS configuration

Because the bus speeds remain unchanged, system stability can be compromised only if the maximum operating frequency of the processor's core is exceeded. Maximum performance potential is best realized by combining several overclocking techniques, but multiplier overclocking is a favorite of many enthusiasts because it creates fewer problems.

Figure 4-4: Multiplier configuration example

Depending on your system hardware, overclocking through multiplier manipulation alone may be impractical. For example, Intel has locked the core multiplier on every processor since the earliest Pentium II-based designs, aside from the occasional unlocked engineering sample that surfaces in the underground market. All current and near-future Intel processors are completely locked, forcing owners to rely on front-side bus overclocking techniques.

Knowing your motherboard is critical to assessing the overclocking potential of any current AMD Athlon system. The majority of Athlon-based motherboards lack the features users need to control multiplier values, since the required circuitry increases manufacturing costs. Those willing to risk hardware-level modifications can overcome this limitation.

Overclocking via Front-side Bus

Front-side bus (or processor-to-chipset) overclocking is the best way to maximize system performance, especially when it can be combined with multiplier overclocking. If your system lacks multiplier adjustment capabilities, you must rely solely on bus overclocking at the motherboard level. The difficulty is that overclocking the front-side bus can affect the rates of all buses throughout the system.

Figure 4-5: Front-side bus configuration example

The front-side bus rate is linked with other bus rates in most x86 systems. The peripheral component interconnect (PCI) bus, the accelerated graphics port (AGP) bus, and the various memory buses are examples of this design paradigm. Each of the system's interconnect buses serves to connect various devices to the processor, and each operates at a rate that is a fraction of the front-side bus rate.

While not all motherboard chipsets offer identical capabilities, most follow industry design specifications for compatibility reasons.

The Memory Bus

The memory bus can operate in one of two modes: synchronous or asynchronous. Synchronous operation means that the memory bus operates at the same base frequency as the front-side bus. The synchronous memory bus is the simplest architecture to manipulate, though it may not be best for maximizing overclocking potential. Asynchronous operation allows the memory bus to function at a different rate than the front-side bus. Asynchronous designs can be based on incremental frequency changes related to the front-side bus frequency or entirely on independent rates.

Many motherboards are able to operate in either synchronous or asynchronous memory access modes. The ability to change the front-side bus frequency depends on the memory access mode in use. Quality memory, capable of stable operation at extended frequencies, is preferred. As expected, different platforms react differently to memory overclocking.
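A minimal sketch of the two access modes (the ratios shown are illustrative, not values tied to any particular chipset): in synchronous mode the memory bus simply tracks the front-side bus, while in asynchronous mode it runs at a configured ratio of it.

    def memory_bus_mhz(fsb_mhz, mode="sync", ratio=1.0):
        # "sync" locks the memory bus to the front-side bus;
        # "async" applies a configured ratio (for example 4/3 or 3/4).
        if mode == "sync":
            return fsb_mhz
        return fsb_mhz * ratio

    print(memory_bus_mhz(100))                   # 100 MHz memory, synchronous
    print(memory_bus_mhz(100, "async", 4 / 3))   # ~133 MHz memory on a 100-MHz bus
    print(memory_bus_mhz(133, "async", 3 / 4))   # ~100 MHz memory on a 133-MHz bus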

Old designs using 30- or 72-pin single inline memory modules (SIMMs), like fast-page or extended data out (EDO) memory, tend to become unstable at relatively low operating speeds during overclocking. The older 30-pin designs can rarely scale beyond 40 MHz, while 72-pin designs generally reach their maximum around 83 MHz. The need for asynchronous bus operation with such architectures became evident as processor-to-chipset rates began to outpace memory capabilities.

Figure 4-6: RAMBUS memory example

Asynchronous memory operation became even more necessary with the adoption of SDRAM, DDR RAM, and RAMBUS memory technologies. Early PC-66 memory modules were, at best, suspect for overclocking. Later fabrication techniques allowed successful scaling to higher operating speeds, up to 166+ MHz with the PC-166 modules. Asynchronous operation does insert longer latencies into the chipset-to-memory pipeline; however, the benefits of greater bandwidth commonly outweigh such penalties. For this reason, most non-Intel-based motherboards allow users to raise or lower the memory bus speed in relation to the front-side bus speed.

Figure 4-7: Common bus rates

The PCI Bus

The PCI bus speed is derived from the front-side bus speed. The PCI 2.x specification defines 33 MHz as the default bus frequency, though most of today's better components can scale to 40 MHz and beyond. In most systems, the PCI bus speed is a fraction of the front-side bus speed. For example, the Pentium IIIe uses a 100-MHz front-side bus. A 1/3 factor is introduced into the PCI timing process to produce the default 33-MHz PCI bus speed.

Certain crossover points in PCI to front-side bus ratios can create stability problems. The most common risky frequencies are those approaching 83 and 124 MHz for the front-side bus. Due to a 1/2 divider limit at the 83-MHz range, the PCI rate is extended to 41.5 MHz, well beyond its 33-MHz default specification. The 124-MHz front-side bus rate leads to a similar scenario, as the 1/3 divider forces a 41.3-MHz PCI rate.
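To make the divider arithmetic concrete, here is a small sketch (the 1/4 divider shown for a 133-MHz bus is an assumed example of the refined dividers mentioned below, and 33.3 MHz is treated as the nominal PCI specification):

    PCI_SPEC_MHZ = 33.3   # nominal PCI 2.x bus frequency

    def pci_mhz(fsb_mhz, divider):
        # The PCI clock is simply the front-side bus clock times the divider.
        return fsb_mhz * divider

    for fsb, div in [(100, 1 / 3), (83, 1 / 2), (124, 1 / 3), (133, 1 / 4)]:
        pci = pci_mhz(fsb, div)
        status = "in spec" if pci <= PCI_SPEC_MHZ + 0.5 else "out of spec"
        print(f"{fsb}-MHz FSB x {div:.3f} -> {pci:.1f}-MHz PCI ({status})")
    # 83 x 1/2 = 41.5 MHz and 124 x 1/3 = 41.3 MHz, both far above the 33-MHz default.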

Some motherboard designs allow users to refine the divider value, but this feature is not common in production-level boards.

PCI components with the highest risk of failure at 40+ MHz are storage drives, especially early-model IDE drives. SCSI drives do not usually exhibit this problem due to their more exacting specifications. Stability issues can often be resolved by lowering the drive-transfer signaling speed by one level. This results in lower bandwidth, though the performance gains realized through overclocking the processor or the front-side bus may negate any loss. Benchmarking utilities are needed to ascertain performance differences.

The AGP Bus

The AGP bus is similarly limited during front-side bus overclocking. Problems again arise at 83 and 124 MHz for nearly all chipset designs. Some motherboard architectures also suffer instability or high failure rates at 100+ MHz due to limitations in early AGP bus implementations. For example, Intel's popular BX chipset can support proper 133-MHz front-side bus operation for all system buses except the AGP. The BX features only 1/1 and 2/3 AGP divider functions, and thus a 133-MHz front-side bus rate leads to a problematic 88.6-MHz AGP rate.
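A quick sketch of the BX example (the 1/1 and 2/3 dividers come from the description above; the 66-MHz figure is the standard baseline AGP clock):

    AGP_SPEC_MHZ = 66.6   # baseline AGP clock

    def agp_mhz(fsb_mhz, divider):
        # The AGP clock is derived from the front-side bus clock by a fixed divider.
        return fsb_mhz * divider

    # The BX chipset offers only 1/1 and 2/3 AGP dividers.
    print(agp_mhz(66, 1))         # 66.0  -> in spec on a 66-MHz front-side bus
    print(agp_mhz(100, 2 / 3))    # ~66.7 -> in spec on a 100-MHz front-side bus
    print(agp_mhz(133, 2 / 3))    # -> the problematic ~88.6-MHz rate noted above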

Figure 4-8: AGP bus configuration

Many of the latest AGP graphics accelerators can operate effectively at extended levels, often up to 90 MHz. For maximum stability, it may be necessary to lower AGP transfer speeds by one level (that is, 4x to 2x) or to disable AGP side-band addressing. Those with older AGP video cards or motherboard-level integrated graphics chipsets need to analyze stability closely through long-term testing. Even if an AGP card seems stable, additional frequency loads can damage the graphics accelerator over time. Failure may come after several weeks of operation, or problems may never surface. AGP overclocking is a gamble; it requires extreme care, especially when it introduces problematic front-side bus rates into the graphics pipeline.

USB or IEEE 1394 Firewire connections do not usually suffer under front-side bus overclocking. These well-designed implementations can handle the extended operating frequencies involved. Older buses, like ISA, can be problematic. Systems with peripherals based on such architecture would likely see greater benefit from upgrading than from overclocking.

Stability Through Voltage Increase

Achieving stability at extended operating speeds often requires increasing voltage levels, and sustaining faster processor speeds can demand a greater core voltage. Similarly, faster chipset operating speeds can often be sustained through a bump in input/output voltage. Several of the latest DDR memory-based motherboards also allow manipulation of memory bus voltage levels. This feature was originally implemented to preserve compatibility with early DDR modules, but the ability to change memory voltage levels has led to significant improvements in stability.

Overclocking enthusiasts have exploited the potential for maximizing operating frequencies in this way.

Figure 4-9: Voltage monitoring

Any increases in voltage levels are potentially hazardous. Most current .18- and .25-micron processor core architectures can operate within a 10 percent variance from the default specification, but added stresses require extra measures to protect long-term system stability. Cooling plays an integral role in the voltage manipulation process.
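As a worked example of that 10 percent guideline (the 1.65-volt default used below is purely illustrative, not the specification for any particular processor):

    def max_voltage(default_volts, tolerance=0.10):
        # Upper bound implied by a given tolerance around the default core voltage.
        return default_volts * (1 + tolerance)

    # A hypothetical 1.65 V core with a 10 percent variance tops out near 1.82 V.
    print(round(max_voltage(1.65), 2))   # 1.82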

Any increase in voltage levels produces additional heat in the core circuitry. All circuits can tolerate heat only up to certain thermal thresholds, so additional cooling is often required to prevent damage from rising temperatures. Processor coolers, heat transfer compounds, case fans, and case design can all affect the cooling capabilities of a system. Further discussion of choices in these areas can be found in Chapter 8, which is dedicated to cooling technologies.

A phenomenon called electron migration can lead to system failure as a result of voltage increases. Electron migration occurs when the flow of electrons gradually displaces material along an integrated circuit's trace routes. As fabrication technologies improve, die size becomes critical in determining maximum voltage tolerances. Smaller die sizes produce narrower trace routes, thus reducing the processor's ability to cope with the stresses of electron migration. As the circuits get smaller, voltage-level tolerances are lowered exponentially. Chapters 6, 7, and 8 detail system-specific information, including information about maximum voltage levels.
