
Monday, 04 June 2018


Not to be confused with clocking (mileometer/odometer fraud).

Overclocking is the configuration of computer hardware components to operate faster than certified by the original manufacturer, with "faster" measured as clock frequency in megahertz (MHz) or gigahertz (GHz). Operating voltage is commonly also increased to maintain a component's operational stability at the accelerated speed. Semiconductor devices operated at higher frequencies and voltages consume more power and generate more heat. An overclocked device may be unreliable or fail completely if the additional heat load is not removed, or if the power delivery components cannot meet the increased demand. Many device warranties state that overclocking or over-specification voids the warranty.





Overview

The purpose of overclocking is to gain additional performance from a given component by increasing its operating speed. Normally, on modern systems, the target of overclocking is a major chip or subsystem, such as the main processor or graphics controller, but other components, such as system memory (RAM) or the system buses (generally on the motherboard), are commonly involved. The trade-offs are increased power consumption (heat) and fan noise (cooling) for the targeted components. Most components are designed with a margin of safety to deal with operating conditions outside the manufacturer's control; examples are ambient temperature and fluctuations in operating voltage. Overclocking techniques in general aim to trade away this safety margin by setting the device to run at the higher end of the margin, with the understanding that temperature and voltage must be more strictly monitored and controlled by the user. For example, operating temperature would need to be more tightly controlled with increased cooling, as the part will be less tolerant of elevated temperatures at the higher speeds. The base operating voltage may also be increased to compensate for unexpected voltage drops and to strengthen signalling and timing, since low-voltage excursions are more likely to cause malfunctions at higher operating speeds.

While most modern devices are fairly tolerant of overclocking, all devices have finite limits: generally, for any given voltage, most parts have a maximum "stable" speed at which they still operate correctly. Past this speed, the device starts giving incorrect results, which can cause malfunctions and sporadic behavior in any system depending on it. While in a PC context the usual result is a system crash, subtler errors can go undetected and, over a long enough time, produce unpleasant surprises such as data corruption (incorrectly calculated results, or worse, incorrect writes to storage) or the system failing only during specific tasks (general usage such as internet browsing and word processing appears fine, but any application demanding advanced graphics crashes the system).

At this point, an increase in the operating voltage of a part may allow more headroom for further increases in clock speed, but the increased voltage can also significantly increase heat output. At some point there will be a limit imposed by the ability to supply the device with sufficient power, the user's ability to cool the part, and the device's own maximum voltage tolerance before it achieves destructive failure. Overzealous use of voltage or inadequate cooling can rapidly degrade a device's performance to the point of failure, or in extreme cases destroy it outright.

The speed gained by overclocking depends largely on the applications and workloads being run on the system and on which components are being overclocked; benchmarks for different purposes are published.

Underclocking

Conversely, the primary goal of underclocking is to reduce power consumption and the heat generated by a device, with the trade-offs being lower clock speeds and reduced performance. Reducing the cooling requirements needed to keep hardware at a given operating temperature has knock-on benefits such as lowering the number and speed of fans to allow quieter operation, and, in mobile devices, increasing battery life per charge. Some manufacturers underclock components of battery-powered equipment to improve battery life, or implement systems that detect when a device is operating on battery power and reduce clock frequency accordingly.

Underclocking is almost always involved in the latter stages of undervolting, which seeks the highest clock speed at which a device will operate stably at a given voltage. That is, while overclocking seeks to maximize clock speed with temperature and power as constraints, underclocking seeks the highest clock speed that a device can reliably maintain at a fixed, arbitrary power limit. A given device may operate correctly at its stock speed even when undervolted, in which case underclocking is only employed once further voltage reduction finally destabilizes the part. At that point the user must determine whether the last working voltage and speed lower power consumption satisfactorily for their needs; if not, performance must be sacrificed, a lower clock is chosen (the underclock), and testing at progressively lower voltages continues from that point. The lower bound is where the device itself fails to function or the supporting circuitry can no longer communicate reliably with the part.
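
That search procedure can be sketched in a few lines. The following is a minimal illustration only; set_voltage() and is_stable() are hypothetical stand-ins for BIOS/UEFI settings and a stress-testing tool, not a real API:

```python
# Minimal sketch of an undervolting search: step the voltage down until
# instability appears, then settle on the last stable value.
# set_voltage() and is_stable() are hypothetical placeholders.

def find_lowest_stable_voltage(set_voltage, is_stable,
                               start_mv=1200, step_mv=25, floor_mv=800):
    last_good = start_mv                 # assume the stock voltage is stable
    for mv in range(start_mv, floor_mv - 1, -step_mv):
        set_voltage(mv)
        if not is_stable():              # e.g. run a stress test here
            break                        # first unstable point found
        last_good = mv                   # lowest voltage proven stable
    set_voltage(last_good)               # back off to the last stable value
    return last_good
```

If the power draw at the returned voltage is still unsatisfactory, the same loop would be rerun at a lower clock, mirroring the procedure described above.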

Underclocking and undervolting are usually attempted when a system needs to operate silently (such as a home-theater or multimedia PC) but higher performance is desired than is offered by the manufacturer's low-voltage processor range. In that case the builder takes a standard desktop part with higher stock thermal output and tests whether the processor will run at lower clock speeds and voltages within acceptable performance and noise targets for the build. This offers the option of undervolting/underclocking a standard-voltage processor into a good "low-voltage" role, avoiding the premium price of a certified low-voltage version (some low-voltage models are significantly more expensive, and even then are often slower than their desktop equivalents), or obtaining better performance than the available low-power processors offer.

Enthusiast culture

Overclocking has become more accessible, with motherboard makers offering overclocking as a marketing feature on their mainstream product lines. However, the practice is embraced more by enthusiasts than by professional users, since overclocking carries risks of reduced reliability and accuracy and of damage to data and equipment. In addition, most manufacturers' warranties and service agreements do not cover overclocked components or incidental damage caused by their use. While overclocking can still be an option for increasing personal computing capacity, and thus workflow productivity for professional users, the importance of thoroughly stability-testing components before putting them into a production environment cannot be overstated.

Overclocking offers several draws for enthusiasts. It allows testing components at speeds not currently offered by the manufacturer, or at speeds only officially offered on specialized, higher-priced versions of a product. A general trend in the computing industry is that new technologies tend to debut in the high-end market first and later trickle down to the performance and mainstream markets. If a high-end part differs only in clock speed, an enthusiast can attempt to overclock a mainstream part to simulate the high-end offering. This can give insight into how over-the-horizon technologies will perform before they are officially available on the mainstream market, which can be especially helpful for users considering whether to plan ahead to purchase or upgrade to the new feature when it is officially released.

Some hobbyists enjoy building, tuning, and "hot-rodding" their systems in competitive benchmarking contests, competing with like-minded users for high scores in standardized computer benchmark suites. Others buy a low-cost model of a component in a given product line and attempt to overclock it to match a more expensive model's stock performance. Another approach is overclocking older components to keep pace with rising system requirements and extend the useful service life of the older part, or at least delay a hardware purchase made solely for performance reasons. A further rationale for overclocking older equipment is that even if overclocking stresses it to the point of earlier failure, little is lost, since it is already depreciated and would need to be replaced anyway.

Components

Technically, any component that uses a timer (or clock) to synchronize its internal operations can be overclocked. Most overclocking efforts, however, focus on specific computer components such as processors (CPUs), video cards, motherboard chipsets, and RAM. Most modern processors derive their effective operating speed by multiplying a base clock (the processor bus speed) by an internal multiplier within the processor (the CPU multiplier) to attain their final speed.

Computer processors are generally overclocked by manipulating the CPU multiplier, if that option is available, but the processor and other components can also be overclocked by increasing the base speed of the bus clock. Some systems allow additional tuning of other clocks (such as a system clock) that influence the bus clock speed which, again multiplied by the processor, allows finer adjustment of the final processor speed.
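
As a concrete illustration of that arithmetic, the sketch below computes an effective CPU speed from a base clock and multiplier; the figures are illustrative and not tied to any particular product:

```python
# Effective CPU speed = base (bus) clock x CPU multiplier.
# All figures below are illustrative.

base_clock_mhz = 100        # the base/bus clock, often called "BCLK"
multiplier = 36             # the CPU multiplier

print(f"{base_clock_mhz * multiplier / 1000:.1f} GHz stock")        # 3.6 GHz

# Overclocking route 1: raise the multiplier (needs an unlocked CPU).
print(f"{base_clock_mhz * 40 / 1000:.1f} GHz at multiplier 40")     # 4.0 GHz

# Overclocking route 2: raise the base clock. Other buses (RAM, PCI/PCIe)
# are often derived from the same base clock, so this shifts their
# frequencies too, as discussed later under CPU locking.
print(f"{110 * multiplier / 1000:.2f} GHz at 110 MHz base clock")   # 3.96 GHz
```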

Most OEM systems do not expose to the user the adjustments needed to change processor clock speed or voltage, which precludes overclocking (for warranty and support reasons). The same processor installed on a different motherboard that offers such adjustments will allow the user to change them.

Any given component will ultimately stop operating reliably past a certain clock speed. A component will generally show some kind of malfunctioning behavior or other indication of compromised stability that alerts the user that a given speed is unstable, but there is always a possibility that it will fail permanently without warning, even if voltages are kept within pre-determined safe values. The maximum speed is determined by overclocking to the point of first instability, then backing off to the last stable setting. Components are only guaranteed to operate correctly up to their rated values; beyond that, different samples may have different overclocking potential. The end point of a given overclock is determined by parameters such as the available CPU multipliers, bus dividers, and voltages; the user's ability to manage thermal loads and cooling technique; and several properties of the individual device itself, such as semiconductor clock and thermal tolerances and its interaction with other components and the rest of the system.
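
One way to automate the "first instability, then back off" procedure is a binary search between a known-good clock and an aggressive ceiling. This is a sketch under stated assumptions: set_clock() and passes_stress_test() are hypothetical stand-ins for firmware settings and a tool such as Prime95, and stability is assumed to be roughly monotonic in frequency:

```python
# Sketch: binary search for the highest stable clock.
# set_clock() and passes_stress_test() are hypothetical placeholders.

def find_max_stable_clock(set_clock, passes_stress_test,
                          low_mhz=3600, high_mhz=5000, step_mhz=25):
    best = low_mhz                       # the floor is assumed stable
    while high_mhz - low_mhz > step_mhz:
        mid = (low_mhz + high_mhz) // 2
        set_clock(mid)
        if passes_stress_test():
            best = mid                   # stable: remember and search higher
            low_mhz = mid
        else:
            high_mhz = mid               # unstable: search lower
    set_clock(best)                      # back off to the last stable point
    return best
```

In practice each candidate speed must be soak-tested for hours (see "Stability and functional correctness" below), so overclockers often step linearly in small increments instead of bisecting.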




Considerations

There are several things to consider when overclocking. The first is to ensure that the component is supplied with adequate power at a voltage sufficient to operate at the new clock rate. Supplying power with improper settings or applying excessive voltage can permanently damage a component.

In a professional production environment, overclocking is only likely to be used where the increase in speed justifies the cost of the expert support required, the possibly reduced reliability, the consequent effect on maintenance contracts and warranties, and the higher power consumption. If faster speed is required, it is often cheaper, when all costs are considered, to buy faster hardware.

Cooling

All electronic circuits generate heat through the movement of electric current. As clock frequencies in digital circuits and the applied voltage increase, so does the heat generated by components running at the higher performance levels. The relationship between clock frequency and thermal design power (TDP) is linear. However, there is a limit to the maximum frequency, called a "wall". To push past it, overclockers raise the chip voltage to increase the overclocking potential. Voltage increases power consumption, and consequently heat generation, significantly (proportionally to the square of the voltage in a linear circuit, for example); this requires more cooling to avoid damaging the hardware by overheating. In addition, some digital circuits slow down at high temperatures due to changes in MOSFET device characteristics. Conversely, the overclocker may decide to decrease the chip voltage while overclocking (a process known as undervolting), to reduce heat emissions while performance remains optimal.
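
The scaling works strongly against aggressive voltage increases. Dynamic switching power is commonly modeled as P ≈ C·V²·f, so a modest voltage bump multiplies power far faster than the frequency gain it enables; a quick relative calculation (illustrative numbers only):

```python
# Dynamic switching power scales roughly as P ~ C * V^2 * f.
# Comparing against a baseline, the capacitance term C cancels out.

def relative_power(volts, ghz, base_volts=1.20, base_ghz=4.0):
    return (volts / base_volts) ** 2 * (ghz / base_ghz)

# +10% frequency alone: about +10% power.
print(f"{relative_power(1.20, 4.4):.2f}x")   # -> 1.10x
# +10% frequency plus +10% voltage: about +33% power.
print(f"{relative_power(1.32, 4.4):.2f}x")   # -> 1.33x
```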

The stock cooling system is designed for the amount of power produced during non-overclocked use; overclocked circuits can require more cooling, such as more powerful fans, larger heat sinks, heat pipes, and water cooling. Mass, shape, and material all influence a heatsink's ability to dissipate heat. Efficient heatsinks are often made entirely of copper, which has high thermal conductivity but is expensive. Aluminium is more widely used; it has good thermal characteristics, though not as good as copper, and is significantly cheaper. Cheaper materials such as steel do not have good thermal characteristics. Heat pipes can be used to improve conductivity. Many heatsinks combine two or more materials to achieve a balance between performance and cost.

Water cooling carries waste heat to a radiator. Thermoelectric cooling devices, which actually refrigerate using the Peltier effect, can help with the high thermal design power (TDP) processors made by Intel and AMD in the early twenty-first century. A thermoelectric cooling device creates a temperature difference between two plates by running an electric current through them. This method of cooling is highly effective, but itself generates significant heat elsewhere, which must be carried away, often by a convection-based heatsink or a water cooling system.

Other cooling methods are forced convection and phase-change cooling, which is used in refrigerators and can be adapted for computer use. Liquid nitrogen, liquid helium, and dry ice are used as coolants in extreme cases, such as record-setting attempts or one-off experiments, rather than for cooling an everyday system. In June 2006, IBM and the Georgia Institute of Technology jointly announced a new record silicon-based chip clock rate (the rate at which a transistor can be switched, not the CPU clock rate) above 500 GHz, achieved by cooling the chip to 4.5 K (-268.6 °C; -451.6 °F) using liquid helium. The CPU frequency world record is 8.794 GHz, set in November 2012. These extreme methods are generally impractical in the long term, as they require refilling reservoirs of evaporating coolant, and condensation can form on chilled components. Moreover, silicon-based junction gate field-effect transistors (JFETs) degrade below 100 K (-173 °C; -280 °F) and eventually cease to function, or "freeze out", at 40 K (-233 °C; -388 °F), since the silicon ceases to be semiconducting; thus using extremely cold coolants may cause devices to fail.

Submersion cooling, used by the Cray-2 supercomputer, involves sinking part of the computer system directly into a chilled liquid that is thermally conductive but has low electrical conductivity. The advantage of this technique is that no condensation can form on components. A good submersion liquid is Fluorinert, made by 3M, which is expensive. Another option is mineral oil, but impurities such as those in water may cause it to conduct electricity.

Stability and functional correctness

Because an overclocked component operates outside the manufacturer's recommended operating conditions, it may function incorrectly, leading to system instability. Another risk is silent data corruption from undetected errors. Such failures may never be correctly diagnosed and may instead be incorrectly attributed to software bugs in applications, device drivers, or the operating system. Overclocked use may permanently damage components enough to cause them to misbehave (even under normal operating conditions) without becoming totally unusable.

A large-scale 2011 field study of hardware faults causing system crashes in consumer PCs and laptops showed a four- to twenty-fold increase (depending on CPU manufacturer) in system crashes due to CPU failure for overclocked computers over an eight-month period.

In general, overclockers claim that testing can ensure that an overclocked system is stable and functioning correctly. Although software tools are available for testing hardware stability, it is generally impossible for a private individual to thoroughly test the functionality of a processor. Achieving good fault coverage requires immense engineering effort; even with all the resources manufacturers dedicate to validation, faulty components and even design faults are not always detected.

A particular "stress test" can verify only the functionality of the specific instruction sequences used, in combination with the data, and may not detect faults in other operations. For example, an arithmetic operation may produce the correct result but incorrect flags; if the flags are not checked, the error will go undetected.

To further complicate matters, in process technologies such as silicon on insulator (SOI), devices display hysteresis: a circuit's performance is affected by past events, so without carefully targeted tests it is possible for a particular sequence of state changes to work at overclocked rates in one situation but not in another, even if the voltage and temperature are the same. Often, an overclocked system that passes stress tests experiences instabilities in other programs.

In overclocking circles, "stress tests" or "torture tests" are used to check a component's correct operation. These workloads are selected because they put a very high load on the component of interest (e.g. a graphically intensive application for testing video cards, or different math-intensive applications for testing general CPUs). Popular stress tests include Prime95, Everest, SuperPi, OCCT, AIDA64, Linpack (via the LinX and IntelBurnTest GUIs), SiSoftware Sandra, BOINC, Intel Thermal Analysis Tool, and Memtest86. The hope is that any functional-correctness issues with the overclocked component will manifest during these tests; if no errors are detected, the component is then deemed "stable". Since fault coverage is important in stability testing, tests are often run for long periods, hours or even days. An overclocked computer is sometimes described using the number of hours and the stability program used, such as "prime 12 hours stable".
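
The essence of such a test, a heavy workload with a verifiable result, can be sketched in a few lines. Real tools like Prime95 use far heavier, better-targeted workloads with known-correct intermediate results; the toy check below is purely illustrative:

```python
# Toy stability check: repeat a compute-heavy task whose answer is fixed
# and flag any mismatch, in the spirit of (but far weaker than) Prime95.

import time

def toy_stress_test(seconds=60):
    # A rigorous test would compare against a precomputed constant rather
    # than a reference computed on the possibly unstable machine itself.
    reference = sum(i * i for i in range(1_000_000))
    deadline = time.time() + seconds
    iterations = 0
    while time.time() < deadline:
        if sum(i * i for i in range(1_000_000)) != reference:
            return False, iterations   # a silent computation error surfaced
        iterations += 1
    return True, iterations

ok, n = toy_stress_test(seconds=5)
print("stable" if ok else "UNSTABLE", f"after {n} iterations")
```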

Factors allowing overclocking

Overclockability arises in part from the economics of the manufacturing processes of CPUs and other components. In many cases, components are manufactured by the same process and tested after manufacture to determine their actual maximum ratings. Components are then marked with a rating chosen to suit the market needs of the semiconductor manufacturer. If manufacturing yield is high, more highly rated components than required may be produced, and the manufacturer may mark and sell higher-performing components as lower-rated for marketing reasons. In some cases, the true maximum rating of a component may exceed even that of the highest-rated component sold. Many devices sold with a lower rating may behave in all ways as higher-rated ones, while in the worst case, operation at the higher rating may be more problematic.

Notably, higher clocks must always mean greater waste heat generation, as the semiconductors switching to high must dump to ground more often. In some cases, this means the chief drawback of the overclocked part is that far more heat is dissipated than the maximum published by the manufacturer. Pentium architect Bob Colwell calls overclocking an "uncontrolled experiment in better-than-worst-case system operation".

Measuring the effects of overclocking

Benchmarks are used to evaluate performance, and they can become a kind of "sport" in which users compete for the highest scores. As discussed above, stability and functional correctness may be compromised when overclocking, and meaningful benchmark results depend on correct execution of the benchmark. Because of this, benchmark scores may be qualified with stability and correctness notes (e.g. an overclocker may report a score while noting that the benchmark only runs to completion one time in five, or that signs of incorrect execution such as display corruption are visible while running it). A widely used test of stability is Prime95, which has built-in error checking that fails if the computer is unstable.

Using benchmark scores alone, it may be difficult to judge the difference overclocking makes to the overall performance of a computer. For example, some benchmarks test only one aspect of the system, such as memory bandwidth, without taking into account how higher clock rates in that aspect improve system performance as a whole. Apart from demanding applications such as video encoding, high-demand databases, and scientific computing, memory bandwidth is typically not a bottleneck, so a large increase in memory bandwidth may be unnoticeable to a user, depending on the applications used. Other benchmarks, such as 3DMark, attempt to replicate game conditions.



Manufacturer and vendor overclocking

Commercial system builders or component resellers sometimes overclock to sell items at higher profit margins. The seller makes more money by overclocking lower-rated components that are found to operate correctly and selling the equipment at prices appropriate for higher-rated components. While the equipment will usually operate correctly, this practice may be considered fraudulent if the buyer is unaware of it.

Overclocking is sometimes offered as a legitimate service or feature for consumers, in which a manufacturer or retailer tests the overclocking capability of processors, memory, video cards, and other hardware products. Several video card manufacturers now offer factory-overclocked versions of their graphics cards, complete with a warranty, usually at a price intermediate between that of the standard product and a higher-performing non-overclocked product.

It is speculated that manufacturers implement overclocking prevention mechanisms, such as CPU locking, to prevent users from buying lower-priced items and overclocking them. These measures are sometimes marketed as a consumer protection benefit, but are often criticized by buyers.

Many motherboards are sold, and advertised, with extensive facilities for overclocking implemented in hardware and controlled by BIOS settings.



CPU locking

CPU locking is the process of permanently setting a CPU's clock multiplier. AMD CPUs are unlocked in early editions of a model and locked in later editions, but nearly all Intel CPUs are locked, and recent models are very resistant to unlocking, to prevent overclocking by users. AMD ships unlocked CPUs in its Opteron, FX, Ryzen, and Black Series lines, while Intel uses the "Extreme Edition" and "X-Series" monikers. Intel usually has one or two Extreme Edition CPUs on the market, as well as X-series and K-series CPUs analogous to AMD's Black Edition. AMD has the majority of its desktop range in a Black Edition.

Users unlock CPUs to allow underclocking, overclocking, and front-side bus speed compatibility (on older CPUs) with certain motherboards, but unlocking voids the manufacturer's warranty, and mistakes can cripple or destroy a CPU. Locking a chip's clock multiplier does not necessarily prevent overclocking, as the speed of the front-side bus or the PCI multiplier (on newer CPUs) may still be changed to provide a performance increase. AMD Athlon and Athlon XP CPUs are generally unlocked by connecting bridges (jumper-like points) on top of the CPU with conductive paint or pencil lead. Other CPU models (identifiable by serial number) may require different procedures.

Raising the front-side bus or northbridge/PCI clock can overclock locked CPUs, but this throws many system frequencies out of sync, since the RAM and PCI frequencies are modified as well.

Overclocking a component is only worthwhile if that component is on the critical path for a process, that is, if it is the bottleneck. If disk access or the speed of an Internet connection limits the speed of a process, a 20% increase in processor speed is unlikely to be noticed. Overclocking a CPU will not noticeably benefit a game whose frame rate is limited by the speed of the graphics card.
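
This is essentially Amdahl's law: if only a fraction p of total runtime benefits from a speedup of s, the overall speedup is 1 / ((1 - p) + p/s). A quick calculation (illustrative numbers) shows why a bottlenecked workload barely responds:

```python
# Amdahl's law: overall speedup when only a fraction p of the runtime
# benefits from a component speedup of factor s.

def overall_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

# A 20% CPU overclock (s = 1.2) on a task that is only 30% CPU-bound:
print(f"{overall_speedup(0.3, 1.2):.3f}x")   # ~1.053x, barely noticeable
# The same overclock on a fully CPU-bound task:
print(f"{overall_speedup(1.0, 1.2):.3f}x")   # 1.200x
```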

While overclocking that causes visible instability is easy to notice, occasional undetected errors are a serious risk for applications that must be error-free, such as scientific or financial applications.



Graphics

Graphics cards can also be overclocked. There are utilities to achieve this, such as EVGA's Precision, RivaTuner, AMD Overdrive (on AMD cards only), MSI Afterburner, Zotac Firestorm (on Zotac cards), and the PEG Link Mode on Asus motherboards. Overclocking a GPU will often yield a marked increase in performance in synthetic benchmarks, usually reflected in game performance. It is sometimes possible to see that a graphics card is being pushed beyond its limits, before any permanent damage is done, by observing on-screen artifacts. Two distinct "warning bells" are widely understood: flickering and random triangles appearing on the screen usually correspond to overheating of the GPU itself, while white, flashing dots appearing randomly (usually in groups) often mean that the card's video RAM is overheating. It is common to run into one of these problems when overclocking a graphics card; both symptoms at the same time usually mean the card is severely pushed beyond its heat, clock-rate, or voltage limits (if seen when not overclocked, they indicate a faulty card).

If the clock speed is excessive but without overheating, different artifacts appear. There are no general rules, but usually if the core is pushed too hard, black circles or blobs appear on the screen, and overclocking the video memory beyond its limits usually results in the application, or the entire operating system, crashing. After a reboot, video settings are reset to the default values stored in the card's firmware, and the maximum clock rate of that specific card is now known.

Some overclockers fit a potentiometer to the graphics card to adjust the voltage manually (which voids the warranty). This allows much greater flexibility, as overclocking software for graphics cards can rarely adjust the voltage. Excessive voltage increases can damage the graphics card.

Alternatives

Flashing and unlocking can be used to improve the performance of a video card without technically overclocking it.

Flashing refers to using the firmware of a different card with the same core and compatible firmware, effectively making it a higher-model card; this can be difficult and may be irreversible. Sometimes standalone software to modify the firmware files can be found, e.g. NiBiTor (the GeForce 6/7 series are well regarded in this respect), without using the firmware of a better-model video card. For example, video cards with 3D accelerators (most, as of 2011) have two voltage and clock settings, one for 2D and one for 3D, but were designed to operate with three voltage stages, the third lying between the other two, serving as a fallback when the card overheats or as an intermediate stage when switching from 2D to 3D mode. It can therefore be wise to set this middle stage before "serious" overclocking, specifically because of this fallback ability: the card can drop down to this clock rate, losing a few (or sometimes a few dozen, depending on the setting) percent of its efficiency and cooling down, without leaving 3D mode (and afterwards return to the desired high-performance clock and voltage settings).

Some cards have abilities not directly connected with overclocking. For example, Nvidia's GeForce 6600GT (AGP flavor) has a temperature monitor used internally by the card that is invisible to the user if standard firmware is used. Modifying the firmware can display a "Temperature" tab.

Unlocking refers to enabling extra pipelines or pixel shaders. The 6800LE, the 6800GS and 6800 (AGP models only), and the Radeon X800 Pro VIVO were among the first cards to benefit from unlocking. While these models have either 8 or 12 pipelines enabled, they share the same 16x6 GPU core as a 6800GT or Ultra, but pipelines and shaders beyond those specified are disabled; the GPU may be fully functional, or may have been found to have faults that do not affect operation at the lower specification. GPUs found to be fully functional can be unlocked successfully, although it is not possible to be sure that there are no undiscovered faults; in the worst case, the card may become permanently unusable.



History

The first commercially available overclocked processors appeared in 1983, when AMD sold an overclocked version of the Intel 8088 CPU. In 1984, some consumers were overclocking IBM's version of the Intel 80286 CPU by replacing the clock crystal.





References

Notes
  • Colwell, Bob (March 2004). "The Zen of Overclocking". Computer. 37 (3): 9-12. doi:10.1109/MC.2004.1273994.



External links

  • How to Overclock a PC, wikiHow
  • Overclocking guide for Apple iMac G4 main logic board

Overclocking and benchmark databases

  • OC Database: all PC hardware of the last decade (apps, mods, and more)
  • HWBOT: Worldwide Overclocking League, competitions and overclocking data
  • Comprehensive CPU OC Database
  • Segunda Convencion Nacional de OC: Overclocking Extremo by Imperio Gamer
  • Overclocking tools

Source of the article: Wikipedia
