March 28, 2024
Intel Processors over the Years

Intel Corporation has been a major player in the computer industry for the last 55 years. Established in California long before the region became known as Silicon Valley, Intel is currently the world's biggest semiconductor chip producer. Behind this global tech giant's huge numbers, including its 120,000 workers and $213 billion valuation, are the semiconductor chips that serve as the processors inside our computers.

Computer systems wouldn't function without CPUs. The development of the worldwide computer sector, the expansion of the internet, and our current dependence on cloud services all owe a great deal to Intel. While the history of Intel the company is widely known, the history of its chips is less well understood.

Here is a timeline of Intel processor history, beginning with the first commercially available processor, to honour products that have genuinely altered the course of computing.

1971-81: The 4004, 8008 and 8080

The 4004, which came in a 16-pin ceramic dual in-line package, was the first full CPU to be housed on a single chip. When the 4004 was first introduced, its clock frequency was 108 kHz (and later scaled up to 740 kHz). The 4004 contained 2,300 transistors and delivered 0.07 MIPS on a 10 µm (10,000 nm) process.

With 3,500 transistors and a clock speed of 0.5 to 0.8 MHz, the 8-bit 8008 superseded the 4004 in 1972. It was principally used in the TI 742 computer. The 8080 came next in 1974, with 4,500 transistors operating at up to 2 MHz on a 6,000 nm process. It rose to fame for its use in the Boeing AGM-86 cruise missile and the Altair 8800.

1978-82: iAPX 86 (8086), 8088 and 80186 (16-bit)

The 8086, often referred to as the iAPX 86, was Intel's first 16-bit commercial CPU and is regarded as the component that started the x86 processor era. The 8086 was clocked from 5 to 10 MHz and attained up to 0.75 MIPS in computers like the IBM PS/2. It had 29,000 transistors built on a 3,000 nm process.

The 8088 (5-8 MHz), the original PC's processor, was included with the IBM 5150. It was functionally identical to the 8086 apart from its narrower 8-bit external data bus. The 80186 CPU, which Intel introduced in 1982 and which was based on the 8086, achieved more than 1 MIPS at a 6 MHz clock speed and was manufactured at 2,000 nm. One of the first PCs to utilize the 80186 was the Tandy 2000.

1981: iAPX 432

One of the few Intel CPU ideas to fail outright was the iAPX 432; Intel no longer mentions it. The i860/i960 processors of the early 1990s and the highly integrated Timna processor from 2000 are two more processor ideas that later met a similar fate.

The 432 was Intel's first 32-bit design and was extraordinarily complex for its time when it was introduced in 1981. It had hardware-based multitasking and memory management functions.

The 4-8 MHz 432 was intended for high-end systems, but it failed because it was more costly to make and slower than the newly developed 80286 architecture.

The 8086 series was intended to be replaced by the 432 at first, but the project was abandoned in 1982.

1982: 80286

The Intel 80286 launched with extensive memory protection and management features. By 1991, it was capable of more than 4 MIPS at clock rates of up to 25 MHz. This CPU was widely used in the IBM PC-AT and its clones. The device had 134,000 transistors and used a 1,500 nm fabrication process.

The 80286 is regarded as one of Intel's most economically efficient processors ever, with the biggest performance improvement over its predecessor. According to Intel, only the Atom CPU, which debuted roughly 25 years later in 2008, comes close to the 80286's cost-effectiveness.

1985-94: 386 and 376

The 386DX CPU’s introduction in 1985 marked the start of the 32-bit era. The CPU achieved up to 11.4 MIPS with 275,000 transistors (1,500 nm) and clock rates ranging from 16 to 33 MHz.

In 1988, Intel released the 1,000 nm 386SX, which was aimed at mobile and low-cost desktop computer systems and featured a narrower 16-bit data bus. The data bus was reduced to 16 bits in order to streamline the circuit board design and cut costs, even though the 386SX's underlying capabilities remained fully 32-bit. Additionally, only 24 address pins were connected on the 386SX, which restricted it to addressing 16 MB of memory, though this was not a serious limitation at the time.

Because the i387 math coprocessor was not ready for production in time for the 80386's launch, early 386 systems shipped without a matching coprocessor and had to fall back on the older 80287 until the 80387 reached the market.

In 1990, Intel released the 386SL, a fully integrated laptop chip with an on-chip cache, bus, and memory controller. The CPU operated between 20 and 25 MHz and contained 855,000 transistors. The 376/386 processor family was completed with the 386EX (1994) and the 376 (1989), both for embedded systems.

Although the 80386 lost its viability as a personal computer CPU in the early 1990s, demand for the chip in embedded systems and its widespread use by the aerospace sector kept the series in production until September 2007.

1989: 486 and i860

The 486 helped Intel reach its highest growth stage and was created under the direction of Pat Gelsinger, who later ran VMware and returned as Intel's CEO in 2021. The 486DX was introduced as a 1,000 nm and 800 nm design with 25 to 50 MHz, 1.2 million transistors, and 41 MIPS. In 1991, the 16 to 33 MHz 486SX (a 486DX with a disabled math coprocessor) was released.

In 1992, Intel released an upgrade known as the 486DX2 (SX2) with up to 66 MHz, while the 486SL was made available for laptops as an improved 486SX (up to 33 MHz, 800 nm, 1.4 million transistors). The 486DX4, which had a maximum speed of 100 MHz and was the last model in the 486 series, was touted as a cost-effective alternative for individuals who did not want to spend more money on the new Pentium computers. The DX4 contained 1.6 million transistors, a 600 nm manufacturing technology, and a 70.7 MIPS rating.

The i860, Intel's second significant effort at the high-end computing market and its entry into the RISC CPU race, was released in 1989. Having failed to gain traction, the i860 and i960 were phased out in the early 1990s.

1993: Pentium (P5, i586)

In 1993, the first Pentium was released. The Pentium moniker has survived despite 2005 reports that Intel would do away with it in favour of the new Core brand. The name was reportedly chosen so that Intel could defend it as a trademark against AMD, which also sold CPUs carrying the 486 label. The brand is a significant element of Intel's history and a break from the 286/386/486 processor numbers.

The P5 Pentium was introduced in 1993 with 60 MHz and offered in 1996 with up to 200 MHz (P54CS). The number of transistors increased to 3.3 million in the 1996 350 nm design from the initial 3.1 million in the 800 nm design. The P55C was released in 1997 and added MMX (multimedia extensions) to the CPU, increasing its transistor count to 4.5 million and clock speed to 233 MHz. The Pentium MMX for mobile devices reached 300 MHz and was still available in 1999.

1994-99: Bumps in the road

Although Intel has added many great CPUs and architectures to its range over the years, there have been a few roadblocks along the way.

A defect in the Intel P5 Pentium's floating-point unit, one that affected multiple generations of the original Pentium processor, was found in 1994 by a professor at Lynchburg College. The Pentium FDIV flaw caused some division operations to return slightly inaccurate results, which could lead to problems in fields like mathematics and engineering where exact answers are required.

The error was uncommon; Byte magazine estimated that only around 1 in 9 billion divisions would produce an inaccurate result. According to Intel, the issue was caused by missing entries in the lookup table used by the processor's floating-point division logic.
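The flaw can be illustrated with the division test case that circulated widely after the bug became public. The following sketch is a hypothetical check, assuming the commonly quoted operands 4195835 and 3145727 and their correct and flawed quotients; on any modern CPU it simply reports the correct value.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Classic FDIV test operands circulated after the bug was found. */
        volatile double x = 4195835.0;   /* volatile discourages compile-time folding */
        volatile double y = 3145727.0;
        double q = x / y;

        /* A correct FPU returns roughly 1.333820449; a flawed P5 returned
         * roughly 1.333739069 for the same division. */
        double correct = 1.333820449136241;
        double flawed  = 1.333739068902037;

        printf("computed quotient: %.15f\n", q);
        if (fabs(q - flawed) < fabs(q - correct))
            printf("result matches the flawed Pentium output\n");
        else
            printf("result matches the expected value\n");
        return 0;
    }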

The Pentium III CPU, introduced by Intel in 1999, was the first x86 processor to carry a unique ID number known as the PSN, or processor serial number. Unless the user disabled the PSN in the BIOS, applications could read it with ease through the CPUID instruction.

Following its discovery, Intel came under criticism from a variety of organizations, including the European Parliament, which raised privacy concerns about the PSN's potential use by surveillance organizations to identify specific persons. Intel later removed the PSN capability from its CPUs, starting with the Pentium IIIs built on the Tualatin architecture.
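For readers curious how software checked for the feature, the sketch below queries the CPUID feature flags on an x86 machine. It assumes a GCC or Clang toolchain (for the cpuid.h helper); bit 18 of the EDX result for leaf 1 is the PSN flag, which reads as zero on anything made after the feature was withdrawn.

    #include <cpuid.h>   /* GCC/Clang helper for the x86 CPUID instruction */
    #include <stdio.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            printf("CPUID leaf 1 not supported\n");
            return 1;
        }

        /* EDX bit 18 of CPUID leaf 1 is the PSN feature flag; it reads 0 on
         * CPUs without the feature and on Pentium IIIs with PSN disabled. */
        if (edx & (1u << 18))
            printf("processor serial number feature is enabled\n");
        else
            printf("no processor serial number exposed\n");
        return 0;
    }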

1995: Pentium Pro (P6, i686)

The Pentium Pro was a widely misunderstood chip when it first came out. Many people thought the Pro would replace the P5. The Pentium Pro, however, was designed as a forerunner to the Pentium II Xeon, aimed at tasks typical of servers and workstations.

Contrary to what its name suggests, the Pentium Pro's architecture was distinct from that of the regular Pentium and supported features such as out-of-order execution. It also had a 36-bit address bus that could accommodate up to 64 GB of memory.

The Pentium Pro contained 5.5 million transistors, was constructed using a 350 nm process, and was available in a number of variations with clock rates ranging from 150 to 200 MHz. Its most well-known use was as an integral part of the ASCI Red supercomputer, the first to surpass the 1 teraflop performance ceiling.

1997: Pentium II and Pentium II Xeon

The sixth-generation P6 architecture served as the foundation for the consumer-oriented Pentium II processor. It was the first Intel CPU to ship in a slot module rather than a socketed package. The Pentium II carried over the MMX instruction set introduced with the Pentium and had 2 million more transistors (7.5 million in total) than the Pentium Pro, considerably improving 16-bit execution, which had been a weakness of the first P6 release.

The 350 nm Klamath core found its way into the first Pentium IIs (233 and 266 MHz). In 1998, the 250 nm Deschutes shrink made its debut with clock rates of up to 450 MHz. It was also offered as the Pentium II Overdrive, an upgrade option for the Pentium Pro. The Dixon and Tonga cores from the 250 nm generation were used in mobile Pentium II CPUs.

That same year, Intel also offered the Deschutes core as the Pentium II Xeon, with a larger cache and support for multiprocessor configurations.

1998: Celeron

Celerons are based on the company's current CPU technology, but they typically come with significant downgrades, including less cache memory, making them "good enough" processors for the most basic PC tasks. They allow Intel to compete in the low-end PC market.

The 250 nm Covington core for desktops and the 250 nm Mendocino core (19 million transistors, including on-die L2 cache) for laptops served as the foundation for the first generation of Celeron processors. On the desktop, the CPUs ranged from 266 to 300 MHz, while mobile parts reached 500 MHz. They continued to be updated well into the Pentium III era, and later Celerons moved onto the then-current Sandy Bridge architecture.

1999: Pentium III and Pentium III Xeon

The Pentium III, introduced in 1999, was Intel's first entrant in the gigahertz race against AMD. Early in 2000, the chip also had to answer Transmeta's low-power challenge. The Pentium III launched on the 250 nm Katmai core, was quickly scaled down to 180 nm with Coppermine and Coppermine T, and then to 130 nm with the Tualatin core.

Thanks to the integrated L2 cache, the transistor count increased from 9.5 million in Katmai to 28.1 million in the succeeding cores. Clock speeds climbed over time from 450 MHz to 1,400 MHz with Tualatin. Intel had to withdraw its first gigahertz-class processors and re-release them later, after criticism that the initial models had been rushed out to compete with AMD's Athlon.

The release of the Mobile Pentium III in 2000, which featured SpeedStep and could scale the processor clock based on operating mode, is also notable from the customer perspective. Many still believe the Mobile Pentium III would not have been launched without the pressure of Transmeta, the company famous for employing Linux creator Linus Torvalds; the chip arrived just one day before the unveiling of the Transmeta Crusoe.

The last Xeon processor associated with the Pentium name was the Pentium III Xeon, which launched in 1999 with the Tanner core. Intel controversially introduced the PSN along with the Pentium III; following the resulting privacy concerns, the company disabled the functionality and decided not to include it in subsequent CPUs.

2000: Pentium 4

The Pentium 4 may have started Intel on the road to its most radical change in corporate history. The chip's Netburst architecture, introduced in 2000 with the 180 nm Willamette core (42 million transistors), was designed to scale with clock speed; Intel anticipated that the design would let the company reach rates of more than 20 GHz by 2010. Netburst, however, had less headroom than first anticipated, and by 2003 Intel realized that rising clock rates were creating problems with current leakage and power consumption.

With 1.3 and 1.4 GHz at launch, Netburst was upgraded to 2.2 GHz in 2002 with the 130 nm Northwood core (55 million transistors), then to 3.8 GHz in 2005 with the 90 nm Prescott core (125 million transistors). In 2003, Intel also released the first Gallatin-based Extreme Edition CPUs.

With the introduction of Mobile Pentium 4-M processors, Pentium 4E HT (Hyper-Threading) processors that supported a virtual second core, and Pentium 4F processors with the 65 nm Cedar Mill core (Pentium 4 600 series) in 2005, the Pentium 4 series became more complex over time.

Tejas was supposed to be Intel's replacement for the Pentium 4 series, but the company shelved the project when it became evident that Netburst couldn't scale past 3.8 GHz. Core, the architecture that came next, was a sharp swing toward far more efficient CPUs with a strict power limit, throwing Intel's gigahertz machine into reverse.

2001: Xeon

Based on the Netburst architecture of the Pentium 4, the first Xeon that did not have the Pentium name came with the 180 nm Foster core. It was offered with clock speeds ranging from 1.4 to 2 GHz.

The Netburst architecture was still in use in 2006, when Intel extended Xeon into a complete range of UP and MP processors with the 90 nm Nocona, Irwindale, Cranford, Potomac, and Paxville cores, as well as the 65 nm Dempsey and Tulsa cores.

The Netburst chips' high power consumption, a problem shared by Intel's desktop CPUs, compelled the company to change the design and marketing of its processors. The Netburst Xeons ended with the dual-core Dempsey CPU, which had 376 million transistors and clock speeds of up to 3.73 GHz.

Today's Xeons continue to be built on the same technological platform as desktop and mobile CPUs, but Intel keeps them inside a limited power envelope. The first example of this new approach was the dual-core Woodcrest chip of 2006, a server version of the desktop Conroe processor.

By the early 2010s, Xeons were built on Westmere and the 32 nm Sandy Bridge and Sandy Bridge EP architectures, with up to ten cores, clock rates of 3.46 GHz, and up to 2.6 billion transistors.

2001: Itanium

Even though the Itanium was Intel's most misunderstood chip, it endured for a very long time. Although conceptually similar to the i860 and iAPX 432, it gained significant support and survived far longer than either. Introduced as Intel's first 64-bit processor, it was meant to embody the company's broad vision for a 64-bit platform. The Itanium struggled with 32-bit software, however, and received harsh criticism for it.

Itanium was introduced in 2001 with the 180 nm Merced core as a mainframe-class processor with 733 MHz and 800 MHz clock speeds and 320 million transistors, more than six times the count of a desktop Pentium at the time.

In 2002, Intel released the Itanium 2 (180 nm McKinley core, followed by the 130 nm Madison, Deerfield, Hondo, and Fanwood cores). The line was later upgraded to the Itanium 9000 series (90 nm Montecito and Montvale cores) and, in 2010, to the 65 nm Tukwila core with a large 24 MB on-die cache and more than 2 billion transistors.

2002: Hyper-Threading

Intel launched the first contemporary desktop CPU featuring simultaneous multithreading (SMT), branded Intel Hyper-Threading (HT) Technology, in 2002. Intel's Prestonia-based Xeon processors were the first to use HT Technology, followed by the Northwood-based Pentium 4 CPUs. HT presents each physical core to the operating system as two logical processors, letting it run two threads concurrently so that one thread can make progress while the other is stalled, typically while waiting for data.

At the time, Intel stated that the performance increase over a Pentium 4 without Hyper-Threading could reach 30%. In our earlier experiments, we showed that, in certain circumstances, a hyperthreaded 3 GHz CPU can outperform a non-hyperthreaded 3.6 GHz chip. The Itanium, Atom, and Core i-Series CPUs are just a few of the Intel processors that continue to support Hyper-Threading.
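To see how software discovers this capability, the hedged sketch below reads the CPUID feature flags, again assuming a GCC or Clang toolchain and an x86 machine. Note that the HTT bit only says the package can expose multiple logical processors; multi-core chips without SMT set it too, so a real detector would also compare logical and physical core counts.

    #include <cpuid.h>   /* GCC/Clang helper for the x86 CPUID instruction */
    #include <stdio.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            printf("CPUID leaf 1 not available\n");
            return 1;
        }

        /* EDX bit 28 (HTT): the package may expose more than one logical CPU.
         * EBX bits 23:16: maximum number of addressable logical processor IDs. */
        unsigned int htt     = (edx >> 28) & 1;
        unsigned int logical = (ebx >> 16) & 0xff;

        printf("HTT flag: %u, addressable logical processors per package: %u\n",
               htt, logical);
        return 0;
    }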

2003: Pentium M

Mobile computers were the intended market for the Pentium M 700 series, which was introduced in 2003 with the 130 nm Banias core. It embodied the guiding principles of an Intel that no longer prioritised clock speed above all else, but rather power efficiency. The CPU was created for Intel by an Israeli design team under the direction of Mooly Eden, a longtime executive with the company.

Banias reduced clock rates to between 900 MHz and 1.7 GHz, down from the 2.6 GHz of the Pentium 4 Mobile. The processor's TDP rating was just 24.5 watts, far lower than the Pentium 4 chip's 88 watts. Dothan, a 90 nm shrink, reduced the thermal design power further to 21 watts while carrying 140 million transistors and reaching clock rates of 2.13 GHz.

Yonah, introduced in 2006 as the Core Duo and Core Solo processors but unrelated to the later Intel Core microarchitecture, was Dothan's immediate successor. In terms of its impact on Intel, the Banias core is often compared to the 4004, 8086, and 386.

2005: Pentium D

Intel's first dual-core CPU was the Pentium D. The first iteration, launched as the Pentium D 800 series, was still based on Netburst and used the 90 nm Smithfield core (two Northwood cores). It was replaced by the 65 nm dual-core Presler (two Cedar Mill cores).

Both CPUs were also produced by Intel as Extreme Editions, which had a maximum clock speed of 3.73 GHz and a 130 watt power draw, the highest ever for an Intel consumer desktop processor (some server processors went up to 170 watts). Presler had 376 million transistors compared to 230 million in Smithfield.

2005-09: Terascale Computing Research Program

Around 2005, Intel launched its Tera-Scale Computing Research (TSCR) programme to explore ways to improve communication between processors and to solve the many difficulties associated with scaling computers beyond four cores. The Teraflops Research Chip and the Single-Chip Cloud Computer (SCC), two major products from the TSCR programme, made substantial contributions to Intel’s Xeon Phi range of coprocessors.

The 80-core Teraflops Research Chip, often known as Polaris, was created via the TSCR initiative. The device has capabilities including sleeping-core technology, twin floating-point engines, and 3D memory stacking. The chip’s objectives were to construct a semiconductor that could provide a teraflop of computing capability and to test successful scaling beyond four cores on a single die.

The 48-core SCC processor was created under the TSCR programme. The SCC chip was designed with the goal of creating a chip having many sets of distinct cores that could interact with one another directly, much as servers do in a data centre.

The chip has 24 tiles, each with two Pentium cores and 16 KB of cache, arranged in a 4 x 6 two-dimensional mesh, totaling 48 Pentium cores. Performance is greatly enhanced by the use of the tiles, which enable the cores to interact with one another rather than transmitting and receiving data from the main memory.

2006: Core 2 Duo

Intel's response to AMD's Athlon 64 X2 and Opteron CPUs, which were popular at the time, was the Core 2 Duo. The Core microarchitecture was introduced with the 65 nm Conroe (Core 2 Duo E-6000 series) for desktops, Merom (Core 2 Duo T7000 series) for mobile devices, and Woodcrest (Xeon 5100 series) for servers. Intel released quad-core versions shortly after (the Kentsfield Core 2 Quad series for the desktop and the Clovertown Xeon 5300 series for servers).

One of Intel's biggest restructurings and a major repositioning of the business preceded the Core microarchitecture. While Conroe was being developed, Intel positioned its remaining Pentium and Pentium D CPUs to force AMD into a historic price war in 2005 and 2006, and the Core 2 Duo then reclaimed the performance advantage over AMD in 2006. Conroe launched with 291 million transistors and clock rates ranging from 1.2 GHz to 3 GHz. A 45 nm Penryn shrink (Yorkfield for the quad-core models) followed in 2008.

The release of the Core 2 Duo also marked the beginning of Intel's tick-tock cadence, which paired a die shrink one year with a new architecture the next. Previously, Intel had aimed to deliver a die shrink every two years.

2007: Intel vPro

Around 2007, Intel debuted its vPro technology, which is only a marketing name for a collection of hardware-based features found on some of the company’s processors made since that time.

vPro, which is primarily aimed at the corporate market and is sometimes mistaken for Intel's Active Management Technology (AMT), combines several Intel technologies into a single package, including Hyper-Threading, AMT, Turbo Boost 2.0, and VT-x. A vPro-enabled CPU, a vPro-enabled chipset, and a BIOS that supports vPro are required for a computer to use the technology.

Some of the key technologies that are part of vPro are as follows:

Intel Active Management Technology (AMT)

Intel Active Management Technology (AMT) is an array of hardware features that enables system administrators to remotely access and control a computer even while it is not in use. Thanks to AMT's remote configuration technology, basic configuration can be carried out on devices that have no operating system or other management tools installed.

Intel Trusted Execution Technology (TXT)

Intel Trusted Execution Technology (TXT) verifies the legitimacy of a machine using the Trusted Platform Module (TPM). TXT then builds a chain of trust from the various measurements stored in the TPM and uses it to decide which software may execute. This lets system administrators ensure that sensitive data is only handled on trusted platforms.
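The chain-of-trust idea can be sketched with a measurement register that is repeatedly "extended": each new value depends on both the previous value and the newest measurement, so altering any earlier component changes the final result. The example below is a toy illustration with a stand-in hash (a real TPM uses SHA-1 or SHA-256 in its platform configuration registers), not Intel's actual TXT implementation.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy stand-in for a cryptographic hash; a real TPM uses SHA-1/SHA-256. */
    static uint64_t toy_hash(const void *data, size_t len, uint64_t seed) {
        const unsigned char *p = data;
        uint64_t h = seed ^ 0xcbf29ce484222325ULL;   /* FNV-1a style */
        for (size_t i = 0; i < len; i++) {
            h ^= p[i];
            h *= 0x100000001b3ULL;
        }
        return h;
    }

    /* PCR-style extend: the new register value depends on the old value and
     * the new measurement, so the final value commits to the whole chain. */
    static uint64_t extend(uint64_t pcr, const char *measurement) {
        return toy_hash(measurement, strlen(measurement), pcr);
    }

    int main(void) {
        const char *boot_chain[] = { "firmware", "bootloader", "kernel" };
        uint64_t pcr = 0;

        for (size_t i = 0; i < 3; i++)
            pcr = extend(pcr, boot_chain[i]);

        printf("final measurement: %016llx\n", (unsigned long long)pcr);
        /* Changing any earlier component (say, a tampered bootloader) yields
         * a different final value, which policy can refuse to trust. */
        return 0;
    }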

Intel Virtualization Technology (VT)

A hardware-based virtualization solution called Intel Virtualization Technology (VT) enables complete isolation between different workloads while allowing them to share a set of resources. VT also reduces some of the performance overhead associated with adopting software virtualization alone.
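As a rough way to see whether a given machine exposes VT-x, the Linux-only sketch below scans /proc/cpuinfo for the "vmx" flag the kernel reports for Intel's hardware virtualization (AMD's equivalent flag is "svm"). This is an illustrative check rather than part of vPro itself, and firmware can still disable VT-x even when the flag is present.

    #include <stdio.h>
    #include <string.h>

    /* Linux-only sketch: look for the "vmx" CPU flag in /proc/cpuinfo. */
    int main(void) {
        FILE *f = fopen("/proc/cpuinfo", "r");
        if (!f) {
            perror("fopen /proc/cpuinfo");
            return 1;
        }

        char line[4096];
        int found = 0;
        while (fgets(line, sizeof line, f)) {
            /* Only inspect the "flags" lines, then look for the vmx token. */
            if (strncmp(line, "flags", 5) == 0 &&
                (strstr(line, " vmx ") || strstr(line, " vmx\n"))) {
                found = 1;
                break;
            }
        }
        fclose(f);

        puts(found ? "VT-x (vmx) reported by the CPU"
                   : "no vmx flag found (VT-x absent or hidden)");
        return 0;
    }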

2008: Core i-Series

The Nehalem microarchitecture and Intel's 45 nm manufacturing technology were introduced with the first Core i7 CPUs in 2008, with Core i5 and i3 models following. The architecture served as the basis for Intel CPUs under the Celeron, Pentium, Core, and Xeon brands until it was scaled down to 32 nm (Westmere) in 2010. Westmere grew to a maximum of 2.3 billion transistors, a 3.33 GHz clock speed, and eight cores.

2008: Atom

Atom was introduced in 2008 as a CPU designed for nettops and mobile internet devices. The first 45 nm chip had a thermal design power of only 0.65 watts and was sold in a package with a chipset. As netbooks gained popularity in 2008, the less energy-efficient Diamondville core (N200 and N300 series) outsold the Silverthorne core (Z500 series), which Intel had positioned for the ultramobile market.

The first Atom was not well integrated and found success only in the netbook sector. Even the upgraded Lincroft, introduced in 2010 as the Z600, was unable to change the situation. The 32 nm Cedarview generation (D2000 and N2000 series, released in 2011) is the most recent Atom generation for desktop and netbook applications. Intel tried to push Atom into new markets, such as TVs, but largely failed because the chip lacked integration.

2012 saw the introduction of the Atom SoC with the Medfield core. The Z2000 series is Intel's first product for devices like phones and tablets since its ARMv5-based XScale chips, which the company sold between 2002 and 2005.

2010: HD Graphics

2010 saw the release of Intel's Westmere architecture, which added on-die graphics branded as Intel HD Graphics. Previously, any computer without a standalone graphics card relied on the Intel integrated graphics built into the motherboard's Northbridge chip.

The Northbridge chip was eliminated entirely as part of Intel's transition from its Hub Architecture to the new Platform Controller Hub (PCH) design, and the integrated graphics hardware moved onto the same die as the CPU. Unlike the earlier integrated graphics, which had a poor reputation for missing features and weak performance, Intel's HD Graphics made integrated graphics competitive with discrete graphics producers again by significantly improving performance while using less power.

Intel HD Graphics eventually came to dominate the low-to-midrange device segment and gained a significant share of the mobile market as well. The Intel HD Graphics 5000 (GT3) includes 40 execution units, has a 15 watt TDP, and can deliver up to 704 GFLOPS.

As a high-performance variant of HD Graphics, Intel introduced Iris Graphics and Iris Pro Graphics on a select number of its Haswell CPUs in 2013. The Iris Graphics 5100 is similar to the HD Graphics 5000 in most respects, with the exception of its higher TDP of 28 watts, higher maximum frequency of 1.3 GHz, and marginally higher performance of up to 832 GFLOPS.
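Those peak figures follow from a simple product: execution units, floating-point operations per EU per clock, and clock speed. The sketch below reproduces the two numbers under the assumption, not stated above, that each EU of that generation sustains 16 single-precision FLOPs per cycle and that the HD Graphics 5000 tops out at 1.1 GHz.

    #include <stdio.h>

    /* Peak single-precision GFLOPS = EUs x FLOPs per EU per clock x GHz.
     * 16 FLOPs/EU/clock is an assumed figure for this sketch (two SIMD-4
     * FPUs per EU, with a fused multiply-add counted as two operations). */
    static double peak_gflops(int eus, double flops_per_eu_per_clock, double ghz) {
        return eus * flops_per_eu_per_clock * ghz;
    }

    int main(void) {
        printf("HD Graphics 5000 : ~%.0f GFLOPS\n", peak_gflops(40, 16.0, 1.1)); /* ~704 */
        printf("Iris Graphics 5100: ~%.0f GFLOPS\n", peak_gflops(40, 16.0, 1.3)); /* ~832 */
        return 0;
    }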

The Iris Pro Graphics 5200, known inside Intel as Crystalwell, is the first of the company's integrated solutions to include its own on-package DRAM, a 128 MB cache for bandwidth-limited workloads. In late 2013, Intel announced that Iris Pro Graphics would replace HD Graphics in the Broadwell-K family of CPUs.

2010: Many Integrated Core Architecture and Xeon Phi

Around 2010, initial development of Intel's Many Integrated Core (MIC) Architecture started. It drew on technology from a number of past projects, including the Teraflops Research Chip, the SCC project, and the Larrabee microarchitecture. Intel's MIC Architecture devices, later branded Xeon Phi, are coprocessors: specialised processors designed to boost performance by offloading compute-intensive work from the CPU.

In May 2010, Intel unveiled Knights Ferry, a PCIe card with 32 cores running at 1.2 GHz and four threads per core, as its first MIC Architecture prototype board. The development board also featured 2 GB of GDDR5 memory, 8 MB of L2 cache, a 300 watt power draw, and performance of more than 750 GFLOPS.

Knights Corner, a follow-on to Intel's MIC Architecture, was unveiled in 2011. It contained more than 50 cores per chip and was produced using the 22 nm process and Intel's Tri-Gate transistor technology. Knights Corner, Intel's first commercial MIC Architecture product, was quickly embraced by a range of supercomputer manufacturers, including SGI, Texas Instruments, and Cray. Intel formally rebranded Knights Corner as Xeon Phi at the 2012 International Supercomputing Conference in Hamburg.

Knights Landing, Intel's second-generation MIC Architecture, was unveiled in June 2013. According to Intel, the Knights Landing devices would be built on a 14 nm process with up to 72 Airmont-based cores running four threads each. Each card would also handle up to 384 GB of DDR4 RAM, carry 8 to 16 GB of 3D MCDRAM, and have a TDP between 160 and 215 watts.

The Xeon Phi 3100, Xeon Phi 5110P, and Xeon Phi 7120P are three Xeon Phi devices based on the 22 nm process. The Xeon Phi 3100 offers 320 GB/s of memory bandwidth, a suggested price of about $2,000, and double-precision floating-point performance of more than 1 teraflop. At the top of the range, the Xeon Phi 7120P carries a price tag of about $4,100, 1.2 teraflops of double-precision performance, and 352 GB/s of memory bandwidth.

2012: Intel SoCs

Midway through 2012, Intel entered the system on a chip (SoC) market with the release of its Atom SoC line. The first Atom SoCs were just low-power versions of older Atom CPUs, which weren’t very successful when compared to ARM-based SoCs. With the debut of the Baytrail Atom SoCs based on the 22 nm Silvermont architecture in late 2013, Intel SoCs started to gain traction.

The Baytrail chips are full SoCs with all the components required for tablets and laptop computers, much like the Avoton server processors introduced around the same time, and offer TDPs as low as 4 watts. Alongside the Atom-based SoCs, Intel made a concerted effort to enter the high-end tablet market in early 2014 by releasing ultralow-power CPUs carrying the Haswell architecture's Y SKU suffix and TDPs of around 10 watts.

Late in 2014, Intel began selling Broadwell-based processors, furthering their foray into the SoC market. These quad-core devices support up to 8 GB of LPDDR3-1600 RAM and have TDPs as low as 3.5 watts.

2013: Core-i Series – Haswell

2013 saw the introduction of the 22 nm Haswell microarchitecture, which took the place of the 2011 Sandy Bridge architecture in Intel’s Core-i line of CPUs.

Haswell was released by Intel along with the Y SKU suffix for its new low-power CPUs intended for ultrabooks and premium tablets (10- to 15-watt TDP). With the Haswell-EP family of Xeon processors, which had up to 5.69 billion transistors and clock rates as high as 4.4 GHz, Haswell scaled up to 18 cores.

In order to address the heat concerns raised by enthusiasts and overclockers, Intel introduced a refresh of the Haswell series in 2014 dubbed Devil's Canyon, which included a small increase in clock speeds. The Haswell line was then succeeded in 2014 by the Broadwell die shrink, which scaled the architecture down to 14 nm but skipped entry-level desktop CPUs.

2015: Broadwell

2015 saw 14 nm become the standard with the release of Broadwell, the fifth generation of Core processors. Broadwell's die was roughly 37% smaller than its immediate predecessor's, continuing the steady shrink from 45 nm down to 22 nm with Haswell. Intel also claimed quicker wake times and up to 1.5 hours of additional battery life.

Additionally, desktop Broadwell parts used the LGA 1150 socket with dual-channel DDR3L-1333/1600 memory and delivered a boost in integrated graphics performance.

2015: Skylake

Since 2015, each new generation of Intel processors has carried a lake-themed codename, much as Android releases once carried dessert-themed names. The first was Skylake, which was released just seven months after Broadwell yet delivered a roughly 10% increase in instructions per clock (IPC) thanks to microarchitectural improvements.

Although clock speeds could exceed 4 GHz, these processors' pricing made them less popular, despite their slightly smaller cache compared with Broadwell. Whereas Broadwell had appeared in Celeron, Pentium, Xeon, and Core M chips, these parts were confined to Xeon processors.

2016: Kaby Lake

Kaby Lake was notable both as the first Intel hardware not officially supported on Windows 8 or earlier and as the first Intel CPU to depart from the company's renowned "tick-tock" production and design cadence.

While IPC remained essentially unchanged from Skylake, Kaby Lake brought faster CPU clock rates and quicker clock speed adjustments. It appeared in Core, Pentium, and Celeron CPUs, though notably not Xeon, and offered improved 4K video processing. R versions with support for DDR4-2666 RAM were launched with a subsequent Kaby Lake refresh in early 2017.

2017: Ice Lake

The 10th-generation Ice Lake CPUs followed the Core-based Coffee Lake generation. Ice Lake was the first CPU architecture to support Wi-Fi 6 and Thunderbolt 3, introduced a 10 nm process, and reflected the trend toward ever-faster transfer rates and connectivity.

Ice Lake is available in Core and Xeon processors; the SP variant, which has a 3.7 GHz maximum CPU clock rate and up to 40 cores, was released in April 2021. The architecture employs BGA1526 sockets and delivers computational performance of over 1 teraflop.

The initial range of Intel Core i3/i5/i7 CPUs from 2019 is still widely available, while Xeon Silver, Gold, and Platinum variants have been offered since 2021.

2020: Tiger Lake

Tiger Lake is the name given to the 11th-generation Intel Core mobile CPUs. They replaced the Ice Lake mobile CPUs and are offered in dual-core and quad-core variants. This is the first CPU family since Skylake to be offered concurrently under the Celeron, Pentium, Core, and Xeon names.

Tiger Lake chips, the third generation of 10 nm CPUs, are aimed particularly at portable gaming computers, with the Core i9-11980HK boasting a maximum boost clock speed of 5 GHz and frame rates of over 100 fps in supported titles.

Intel processor timeline

  • 1971-81: The 4004, 8008 and 8080
  • 1978-82: iAPX 86 – 8086, 8088 and 80186 (16-bit)
  • 1981: iAPX 432
  • 1982: 80286
  • 1985-94: 386 and 376
  • 1989: 486 and i860
  • 1993: Pentium (P5, i586)
  • 1994-99: Bumps in the road
  • 1995: Pentium Pro (P6, i686)
  • 1997: Pentium II and Pentium II Xeon
  • 1998: Celeron
  • 1999: Pentium III and Pentium III Xeon
  • 2000: Pentium 4
  • 2001: Xeon, Itanium
  • 2002: Hyper-Threading
  • 2003: Pentium M
  • 2005: Pentium D
  • 2005-09: Terascale Computing Research Program
  • 2006: Core 2 Duo
  • 2007: Intel vPro
  • 2008: Core i-Series, Atom
  • 2010: HD Graphics, Many Integrated Core Architecture and Xeon Phi
  • 2012: Intel SoCs
  • 2013: Core-i Series – Haswell
  • 2015: Broadwell, Skylake
  • 2016: Kaby Lake
  • 2017: Ice Lake
  • 2020: Tiger Lake
