Highlighted notes while studying Concurrent Data Structures:
DDR3 SDRAM
Source: Wikipedia
Double Data Rate 3 Synchronous Dynamic Random-Access Memory, officially abbreviated as DDR3 SDRAM, is a type of synchronous dynamic random-access memory (SDRAM) with a high bandwidth ("double data rate") interface, and has been in use since 2007. It is the higher-speed successor to DDR and DDR2 and predecessor to DDR4 synchronous dynamic random-access memory (SDRAM) chips. DDR3 SDRAM is neither forward nor backward compatible with any earlier type of random-access memory (RAM) because of different signaling voltages, timings, and other factors.
Wikipedia is a free online encyclopedia, created and edited by volunteers around the world and hosted by the Wikimedia Foundation.
Highlighted notes while studying Concurrent Data Structures:
DDR4 SDRAM
Source: Wikipedia
Double Data Rate 4 Synchronous Dynamic Random-Access Memory, officially abbreviated as DDR4 SDRAM, is a type of synchronous dynamic random-access memory with a high bandwidth ("double data rate") interface.
Computer memory, also known as RAM, is temporary storage that allows the computer to perform tasks by holding instructions and data in an easily accessible location. There are two main types of computer memory: volatile and non-volatile. Volatile memory, like RAM, loses its contents when power is removed while non-volatile types like ROM retain data without power. Over time, RAM technologies have evolved from SIMMs to DIMMs and SDRAM to DDR, DDR2, and DDR3, with each generation offering faster speeds and higher capacities. Proper identification and installation of the correct RAM type is important for system functionality and performance.
The document describes the specifications and operations of Double Data Rate (DDR) SDRAM memory. It details features like double data rate architecture, burst lengths, CAS latencies, commands like read, write, refresh, and initialization procedures. It provides timing diagrams for different memory operations.
The document describes a memory controller for DDR SDRAM that is implemented using Verilog HDL. DDR SDRAM operates at double the frequency of the processor and transfers data on both the rising and falling edges of the clock, allowing it to have higher bandwidth than SDR SDRAM. The controller generates timing and control signals to properly initialize and refresh the memory and handle read and write operations. Simulation and synthesis of the controller design is done using Xilinx ISE 14.5 software.
DDR - SDRAMs are classified into different types including SDRAM, DDR1, DDR2, DDR3, and DDR4. SDRAM synchronizes itself with the CPU timing to allow for faster memory access. DDR1 allows for higher transfer rates through double pumping of the data bus. DDR2 further increases speeds through lower power usage and internal clock running at half the external clock rate. DDR3 and DDR4 continue to improve speeds and bandwidth through higher data transfer rates and lower voltage requirements. Each new generation is not compatible with previous types due to changes in signaling and interfaces.
DDR3 is an evolution of DDR2 RAM that provides faster speeds, lower power consumption, and other improvements. Key features of DDR3 include higher clock frequencies up to 1600 MHz, a lower voltage of 1.5 V, 8-bit prefetch, on-die termination for better signal quality, and fly-by topology. DDR3 also has read/write leveling to calibrate timing, lower signaling standards for reduced power/noise, and improved routing guidelines.
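As a sanity check on the figures above, a DDR3 module's peak bandwidth follows directly from its bus clock and width: the effective transfer rate is twice the I/O clock because data moves on both edges. A minimal sketch (the 64-bit module width and 800 MHz clock are standard DDR3-1600 figures, not taken from the document):

```python
# Peak transfer bandwidth of a DDR memory module (illustrative calculation).
# DDR transfers data on both clock edges, so the transfer rate in MT/s
# is twice the I/O bus clock in MHz.

def ddr_peak_bandwidth_mb_s(bus_clock_mhz: float, bus_width_bits: int = 64) -> float:
    transfers_per_sec = bus_clock_mhz * 1e6 * 2        # double data rate
    return transfers_per_sec * bus_width_bits / 8 / 1e6  # bits -> bytes -> MB/s

# DDR3-1600: 800 MHz bus clock, 1600 MT/s, 64-bit module -> 12800 MB/s
# (hence the module designation PC3-12800)
print(ddr_peak_bandwidth_mb_s(800))  # 12800.0
```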
DDR memory is a type of RAM that allows for increased performance over single data rate memory by facilitating two data transactions per clock cycle without doubling the clock speed. It consists of over 130 signals and uses mode and extended mode registers to control operations. DDR memory comes in SRAM and DRAM varieties, with DRAM being more common due to its lower power consumption and use in main memory, though it requires constant refreshing to prevent data loss.
This document summarizes the key aspects of a DDR2 SDRAM controller, including:
1) It describes the differences between DDR1 and DDR2 memory technologies, such as lower power consumption and higher data rates in DDR2.
2) It provides a block diagram of the main components and I/O signals of a DDR2 SDRAM controller.
3) It explains the basic functionality of a DDR2 SDRAM controller, including initialization, refresh operations, and read and write operations.
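The refresh operation mentioned in point 3 is typically driven by a free-running counter in the controller: when the counter reaches the refresh interval, the controller owes the memory an AUTO REFRESH command. A minimal sketch, assuming an illustrative 100 MHz controller clock and a 7.8 us average refresh interval (both assumptions, not figures from the document):

```python
# Minimal sketch of a periodic-refresh counter as found in a DDR2-style
# controller. REFRESH_INTERVAL_CYCLES assumes a 100 MHz controller clock
# and a 7.8 us average refresh interval (illustrative values).

REFRESH_INTERVAL_CYCLES = 780  # 7.8 us at 100 MHz

class RefreshTimer:
    def __init__(self) -> None:
        self.count = 0
        self.pending = 0  # refreshes owed to the memory

    def tick(self) -> None:
        self.count += 1
        if self.count >= REFRESH_INTERVAL_CYCLES:
            self.count = 0
            self.pending += 1  # controller must issue AUTO REFRESH soon

t = RefreshTimer()
for _ in range(2000):
    t.tick()
print(t.pending)  # 2 refreshes owed after 2000 cycles
```

In hardware the same logic would be a small counter and a request flag; the controller arbitrates the pending refresh against outstanding read/write requests.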
HBM stands for High Bandwidth Memory and is a memory interface used with 3D-stacked DRAM (dynamic random-access memory) in GPUs, as well as in the server, machine-learning DSP, high-performance computing, networking, and client spaces.
High Bandwidth Memory (HBM) is a high-speed stacked memory interface used in high-performance graphics cards and supercomputers. HBM achieves higher bandwidth than GDDR5 using 3D stacking of DRAM dies and through-silicon vias. The first HBM was produced in 2013, and the technology has since progressed through HBM2, HBM2E, and upcoming HBMnext standards, doubling bandwidth with each generation. HBM is used to provide massive memory bandwidth for applications such as graphics processing and AI.
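The bandwidth advantage described above comes mostly from interface width: each HBM stack exposes a 1024-bit bus, so even modest per-pin rates yield large aggregate bandwidth. A rough sketch using the commonly quoted per-pin rates for the first two generations (illustrative figures, not from the document):

```python
# Per-stack bandwidth of HBM generations (illustrative).
# Each HBM stack has a 1024-bit interface, so aggregate bandwidth is
# the per-pin rate times the bus width.

def hbm_stack_bandwidth_gb_s(pin_rate_gbit_s: float, bus_width_bits: int = 1024) -> float:
    return pin_rate_gbit_s * bus_width_bits / 8  # bits -> bytes, GB/s per stack

print(hbm_stack_bandwidth_gb_s(1.0))  # HBM1 at 1 Gbit/s per pin: 128.0 GB/s
print(hbm_stack_bandwidth_gb_s(2.0))  # HBM2 at 2 Gbit/s per pin: 256.0 GB/s
```

This is why a GPU with four stacks can reach the terabyte-per-second range that GDDR5's narrow, fast interface cannot match at comparable power.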
Highlighted notes while studying Concurrent Data Structures:
DDR SDRAM
Source: Wikipedia
Double Data Rate Synchronous Dynamic Random-Access Memory, officially abbreviated as DDR SDRAM, is a double data rate (DDR) synchronous dynamic random-access memory (SDRAM) class of memory integrated circuits used in computers. DDR SDRAM, also retroactively called DDR1 SDRAM, has been superseded by DDR2 SDRAM, DDR3 SDRAM, and DDR4 SDRAM, and soon will be superseded by DDR5 SDRAM. None of its successors are forward or backward compatible with DDR1 SDRAM, meaning DDR2, DDR3, DDR4 and DDR5 memory modules will not work in DDR1-equipped motherboards, and vice versa.
Asynchronous DRAM (ADRAM) is widely used due to its internal architecture and interface to the processor's memory bus. However, ADRAM has slow access times which degrade system performance. Synchronous DRAM (SDRAM) was developed to exchange data with the processor synchronized by an external clock, allowing full processor speed without wait states. Later, Double Data Rate SDRAM and Rambus DRAM were introduced to increase data transfer rates.
Get It Right the First Time: LPDDR4 Validation and Compliance Test (Barbara Aichinger)
JEDEC LPDDR4 Compliance and Validation Testing. Learn about electrical and protocol testing and validation. DDR Memory is in almost all computing devices today.
RAM (Random Access Memory) is the part of a computer's main memory that is directly accessible by the CPU. The CPU reads and writes RAM at any address in any order, hence "random access." RAM is volatile: if the power goes off, the stored information is lost.
The document describes the memory hierarchy in a computer, including CPU registers, cache memory, main memory, secondary memory, and auxiliary memory. Main memory is divided into read-only ROM and random-access RAM, which holds temporary data and instructions. The most common secondary memory is the hard disk, while auxiliary memory includes CDs, DVDs, and external disks.
This document discusses NAND flash memory, which is used in USB flash drives for portable storage. It describes how NAND flash works, including that it has a controller that sends commands serially to program and read the flash. Issues with NAND flash include bad blocks, long access times since it is not random access, and short lifetimes due to being programmable. Technologies like wear leveling aim to extend the lifetime by distributing writes across blocks.
Compared to single data rate (SDR) SDRAM, the DDR SDRAM interface makes higher transfer rates possible by more strict control of the timing of the electrical data and clock signals. Implementations often have to use schemes such as phase-locked loops and self-calibration to reach the required timing accuracy.[4][5] The interface uses double pumping (transferring data on both the rising and falling edges of the clock signal) to double data bus bandwidth without a corresponding increase in clock frequency. One advantage of keeping the clock frequency down is that it reduces the signal integrity requirements on the circuit board connecting the memory to the controller. The name "double data rate" refers to the fact that a DDR SDRAM with a certain clock frequency achieves nearly twice the bandwidth of a SDR SDRAM running at the same clock frequency, due to this double pumping.
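The double-pumping idea above can be illustrated with a toy model (a sketch only, not a hardware simulation): sample a square-wave clock and count transfer opportunities, with SDR using rising edges only and DDR using both edges.

```python
# Toy model of "double pumping": SDR transfers data only on rising clock
# edges, while DDR transfers on both rising and falling edges, moving
# twice the data per clock period at the same clock frequency.

clock = [0, 1, 0, 1, 0, 1, 0, 1, 0]  # four full clock periods

def transfers(clock: list[int], both_edges: bool) -> int:
    count = 0
    for prev, cur in zip(clock, clock[1:]):
        rising = prev == 0 and cur == 1
        falling = prev == 1 and cur == 0
        if rising or (both_edges and falling):
            count += 1
    return count

print(transfers(clock, both_edges=False))  # SDR: 4 transfers in 4 periods
print(transfers(clock, both_edges=True))   # DDR: 8 transfers in 4 periods
```

The clock itself never exceeds its base frequency, which is the signal-integrity advantage the paragraph above describes.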
This document provides an overview of NAND flash memory technology and reliability issues. It begins with introductions to memory technologies and flash applications. The document then discusses NAND flash cell structure and operation, including programming, erasing, and reading. Key reliability issues covered include endurance, data retention, and program interference. The document provides references and outlines potential failure mechanisms and mitigation techniques like error correction codes and wear leveling.
The DDR PHY Interface (DFI) defines the signals, timing parameters, and programmable parameters required to transfer control information and data between the memory controller (MC), PHY, and DRAM devices. DFI allows MC and PHY IP cores developed by different companies to interoperate. It also provides a standardized interface for MC and PHY designs developed by different engineering groups within the same company. The DFI specification supports operating the PHY at higher frequencies than the MC, up to 4x, to enable higher DRAM frequencies and potential performance improvements for the system.
The Intel 8257 is a 4-channel direct memory access (DMA) controller that allows peripheral devices to directly access computer memory at high speeds. It generates memory addresses upon a peripheral request and controls the transfer of data between peripherals and memory without involving the CPU. The 8257 has priority logic to resolve requests from multiple peripherals and can transfer a block of data in a single burst, notifying the CPU when complete via a terminal count output. It represents a significant simplification and reduction in components for transferring data between peripherals and memory in microcomputer systems.
Universal Flash Storage (UFS) is an upcoming memory specification for use in mobile phones, tablets, and other consumer electronics devices.
It is the successor to embedded MultiMediaCard (eMMC), which currently prevails, and will be available both as on-chip storage and in expandable form (memory cards).
This document discusses the history and technical details of Serial ATA (SATA) storage interfaces. It covers:
- The evolution of parallel ATA standards over time and their limitations that led to SATA.
- The key benefits of SATA including smaller connectors, higher speeds, and support for multiple devices via point-to-point connections.
- An overview of the SATA architecture and protocol stack, including the physical, link, and transport layers.
- Details of the physical layer such as connectors, cabling, and out-of-band signaling.
- How the link layer implements 8b/10b encoding, scrambling, frame structure, and flow control primitives.
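The 8b/10b encoding mentioned in the last bullet sends 10 line bits for every 8 data bits, a fixed 20% overhead. That is why SATA's 1.5 Gbit/s line rate is usually quoted as 150 MB/s of payload. A minimal sketch of the arithmetic:

```python
# Effective payload rate under 8b/10b encoding.
# 10 line bits carry 8 data bits, so 20% of the line rate is overhead.

def payload_mb_s(line_rate_gbit_s: float) -> float:
    data_bits_per_sec = line_rate_gbit_s * 1e9 * 8 / 10
    return data_bits_per_sec / 8 / 1e6  # bits -> bytes -> MB/s

print(payload_mb_s(1.5))  # SATA 1.5 Gbit/s line rate -> 150.0 MB/s
print(payload_mb_s(3.0))  # SATA 3.0 Gbit/s line rate -> 300.0 MB/s
```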
PCIe is a standard expansion card interface introduced in 2004 to replace PCI and PCI-X. It uses serial instead of parallel communication and is scalable, allowing for higher maximum system bandwidth. The presentation discusses the history of expansion card standards leading to PCIe, including ISA, EISA, VESA, PCI, and PCI-X. It also covers key aspects of PCIe such as the root complex, endpoints, switches, lanes, bus:device.function notation, enumeration, and address spaces such as configuration space.
PCI Express is a high-speed serial computer expansion bus standard that was created to replace older standards like PCI, PCI-X, and AGP. It provides dedicated bandwidth to devices through the use of lanes and is commonly used as the interface for graphics cards, hard drives, and other peripherals. PCIe has gone through several generations that have increased its maximum bandwidth. It uses a layered protocol architecture and is designed for compatibility while providing scalable bandwidth and other advantages over older standards.
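The lane-based scalability described above is multiplicative: an xN link simply aggregates N lanes. A sketch using the commonly quoted usable per-lane rates after encoding overhead (the Gen3 figure is approximate; all values are illustrative, not from the document):

```python
# PCIe link bandwidth scales with lane count (illustrative figures).
# Per-lane rates after encoding overhead: Gen1 ~250 MB/s (2.5 GT/s,
# 8b/10b), Gen2 ~500 MB/s (5 GT/s, 8b/10b), Gen3 ~985 MB/s
# (8 GT/s, 128b/130b, approximate).

PER_LANE_MB_S = {1: 250, 2: 500, 3: 985}  # generation -> MB/s per lane

def link_bandwidth_mb_s(generation: int, lanes: int) -> int:
    return PER_LANE_MB_S[generation] * lanes

print(link_bandwidth_mb_s(1, 16))  # Gen1 x16: 4000 MB/s
print(link_bandwidth_mb_s(3, 4))   # Gen3 x4:  3940 MB/s
```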
This document summarizes different types of random access memory (RAM), including static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), and double data rate SDRAM (DDR SDRAM). It describes the basic operation and characteristics of each type of RAM, such as the use of transistors and capacitors, refresh requirements, packaging, and timing. Key details covered include the differences between SRAM and DRAM, DRAM refresh requirements, DRAM and SDRAM timing diagrams, and how DDR SDRAM transfers data on both clock edges.
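The DRAM refresh requirement mentioned above reduces to a simple budget: every row must be refreshed within the retention window, so the controller spreads refreshes evenly across it. Using typical figures (64 ms retention, 8192 rows; assumptions, not values from the document):

```python
# DRAM refresh budget (typical figures, not from the document):
# with 8192 rows and a 64 ms retention window, the controller must
# issue a refresh about every 7.8 microseconds on average.

def avg_refresh_interval_us(retention_ms: float = 64.0, rows: int = 8192) -> float:
    return retention_ms * 1000.0 / rows  # ms -> us, spread over all rows

print(avg_refresh_interval_us())  # 7.8125 us
```

This average interval corresponds to the tREFI parameter found in SDRAM datasheets.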
This document provides an introduction to a thesis on reducing leakage power in cache memory cells. It discusses how advanced microprocessors require large, low-cost memories that cannot be satisfied by embedded DRAMs or planar DRAMs alone. SRAM is commonly used for cache due to its fast access times. Reducing leakage in even a single cache cell can significantly improve overall system power efficiency since caches are large. The objective is to analyze circuit techniques to reduce leakage in 6T SRAM cache cells and compare a proposed 5T cell. Key topics to be covered include memory concepts, cache overview, leakage sources and reduction techniques, and 5T cell design. Terminology used is also defined.
Flash memory is a type of non-volatile computer storage that can be electrically erased and reprogrammed. It was invented in 1980 by Dr. Fujio Masuoka and is commonly used in USB drives, memory cards, and solid state drives. The presentation discusses the history of flash memory, its uses, types including mobile device memory, compact flash, and USB drives, as well as new developments and the future of the technology.
1) DDR memory technology enables memory subsystems to transfer data at twice the frequency of single data rate memory by transferring data on both the rising and falling edges of the clock. This improves performance but also makes the design and debugging more challenging due to reduced timing margins.
2) Debugging DDR memory modules requires examining components like the PLL to ensure proper clock generation and alignment, termination resistors to optimize timing, and registers to confirm signals are latched within specifications. Tuning elements like feedback capacitors and resistors can help optimize timing.
3) Testing tools are needed to thoroughly evaluate DDR memory, including memory testers, stress tests, and equipment to measure clock signals on DIMMs independently of a system.
GDDR4 SDRAM is a type of graphics card memory that was intended to replace GDDR3. In 2005, Samsung developed the first 256-Mbit GDDR4 chip running at 2.5 Gbit/s. GDDR4 introduced technologies like Data Bus Inversion and Multi-Preamble to reduce power consumption and improve performance. While it achieved higher speeds and bandwidth than GDDR3, GDDR4 was quickly replaced by GDDR5 within a year as manufacturers like Qimonda moved directly to the newer standard.
Design, Validation and Correlation of Characterized SODIMM Modules Supporting... (IOSR Journals)
Abstract: In any computing environment, the processor needs fast, readily accessible RAM for temporary storage of data. The DDR3 SODIMM module is a key component of the memory interface and is becoming increasingly important in enabling higher speeds. At bandwidths and speeds above 1 GHz, DDR3 poses ever greater high-speed signaling and design challenges. A characterized SODIMM module must be designed to understand and analyze the impact of SODIMM parameters at higher speeds and thereby define a more robust memory interface. The work includes simulation, board design, validation, and correlation of results, and involves high-speed simulation and validation methodologies. Keywords: Validation, Correlation, DDR3, Characterized SODIMM, Signal Integrity
The document discusses various types of computer memory technologies, including RAM types like DRAM, SRAM, DDR, DDR2, and DDR3. It explains the memory hierarchy from registers to cache to main memory to disks. Key points covered include how DRAM works using capacitors that must be periodically refreshed, advantages of SDRAM over regular DRAM like pipelining commands. Generations of DDR memory are compared in terms of clock speeds, data rates, and other features.
This document provides information about basic computer components and types of computers. It discusses the basic competencies required for computer operations as well as common and core competencies. It then defines what a computer is, its main parts including hardware and software, and types of computers such as laptops, desktops, tablets, and more. The rest of the document describes the basic components of a desktop computer in detail, including the monitor, keyboard, mouse, motherboard, RAM, power supply, CPU, hard disk drive, and optical drive. Memory types such as SIMMs, SDRAM, RDRAM, DDR, DDR2, DDR3, and DDR4 are also explained.
Computer memory, also known as RAM, is temporary storage that allows the computer to perform tasks by holding instructions and data in an easily accessible location. There are two main types of computer memory: volatile and non-volatile. Volatile memory, like RAM, loses its contents when power is removed while non-volatile types like ROM retain data without power. Over time, RAM technologies have evolved from SIMMs to DIMMs and SDRAM to DDR, DDR2, and DDR3, with each generation offering faster speeds and higher capacities. Proper identification and installation of the correct RAM type is important for system functionality and performance.
PowerEdge Rack and Tower Server Masters - AMD Server Memory.pptx - NeoKenj
This document provides an overview of AMD server memory options for Dell PowerEdge servers, including:
- Details on 2nd generation EPYC memory configurations and benefits like increased memory speeds and bandwidth
- Examples of memory technologies, capacity options, and population rules for configuring Dell PowerEdge rack and tower servers equipped with AMD EPYC processors
- Charts showing the memory support for different PowerEdge server models, including up to 4TB of memory support on some 2-socket models
This document provides an overview of the design of a dual port SRAM using Verilog HDL. It begins with an introduction describing the objectives and accomplishments of the project. It then reviews relevant literature on SRAM design. The document describes the FPGA design flow and introduces Verilog. It provides the design and operation of the SRAM, and discusses simulation results and conclusions. The proposed 8-bit dual port SRAM utilizes negative bitline techniques during write operations to improve write ability and reduce power consumption and area compared to conventional designs.
The document traces the evolution of computer memory from early vacuum tubes to modern RAM standards. It begins with vacuum tubes and the creation of transistors by Bell Labs. Transistors became the core component of memory, starting with simple latches that could store 1 bit. Dynamic RAM uses transistors and capacitors to store data, while static RAM uses an arrangement of transistors. Standards progressed from SDRAM, which transferred data on clock edges, to DDR RAM which doubled this by transferring data on both the positive and negative clock edges. Later standards like DDR2, DDR3, and DDR4 continued to increase bandwidth and clock rates.
The document describes the memory hierarchy in computers from fastest to slowest: CPU caches (L1, L2, L3), main memory (RAM), virtual memory, and permanent storage (hard disks). L1 cache is built into the CPU and holds frequently used data for very fast access. Main memory (RAM) is where operating systems and active programs are run but is slower than cache. Virtual memory manages RAM use through disk storage. Permanent storage on disks retains data even when powered off but is the slowest to access.
Solid State Drives - Seminar for Computer Engineering Semester 6 - VIT, Univer... - ravipbhat
This document summarizes a seminar presentation on solid state drives (SSDs). It introduces SSDs and discusses their memory types, form factors, architecture, and components. It compares SSDs to hard disk drives in terms of startup speed, technical specifications, advantages, and disadvantages. The document outlines SSD maintenance concepts like garbage collection and trim and discusses SSD applications before concluding.
A memory module is a circuit board that holds random access memory (RAM) and plugs into a computer's memory slots. It allows RAM to be easily added or replaced. Common types include DIMMs, SIMMs, and SO-DIMMs. A memory module contains multiple memory chips and connects via pins along one edge.
This document describes the implementation of a DDR SDRAM controller using Verilog HDL. It begins with background on DDR SDRAM and its advantages over SDR SDRAM. It then discusses the design of the DDR SDRAM controller, including its main functional blocks - the control interface module, command module, and data path module. The control interface module contains a finite state machine to generate control signals. The command module contains registers and multiplexers to handle commands. The data path module interfaces with the processor and SDRAM. The controller was simulated and synthesized using ModelSim and Xilinx ISE, with the results shown. In conclusion, the DDR SDRAM controller takes advantage of the high speed and pipelined architecture of DDR SDRAM.
IRJET- Design And VLSI Verification of DDR SDRAM Controller Using VHDL - IRJET Journal
The document describes the design and verification of a DDR SDRAM controller using VHDL. It discusses the architecture and functional blocks of the DDR SDRAM controller, which includes a SDRAM controller module, control interface module, command module, and data path module. The control interface module decodes commands from the host and tracks refresh requests. The command module generates the appropriate commands to the SDRAM based on the decoded commands and addresses. The data path module handles read and write data transfer operations at double data rate to achieve higher bandwidth compared to SDRAM. The DDR SDRAM controller was implemented in Verilog HDL and simulated and synthesized using appropriate tools.
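The control-interface state machine these controllers describe can be caricatured in a few lines. This is a heavily simplified, hypothetical model (the state and command names are mine, not taken from either controller design); a real DDR controller also sequences refresh timing, mode-register setup, and per-bank timing counters:

```python
# Simplified DDR-controller command FSM: a bank's row must be ACTIVATEd
# (opened) before READ/WRITE, and PRECHARGEd (closed) afterwards.
TRANSITIONS = {
    ("IDLE", "ACTIVATE"): "ROW_ACTIVE",
    ("ROW_ACTIVE", "READ"): "READING",
    ("ROW_ACTIVE", "WRITE"): "WRITING",
    ("READING", "PRECHARGE"): "IDLE",
    ("WRITING", "PRECHARGE"): "IDLE",
    ("IDLE", "REFRESH"): "REFRESHING",
    ("REFRESHING", "DONE"): "IDLE",
}

def step(state: str, cmd: str) -> str:
    # Unknown (state, command) pairs are ignored; a real controller
    # would flag them as illegal command sequences.
    return TRANSITIONS.get((state, cmd), state)

state = "IDLE"
for cmd in ["ACTIVATE", "READ", "PRECHARGE"]:
    state = step(state, cmd)
```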
1. The document discusses EMC's mainframe tape solutions, including the Disk Library for Mainframe products. It provides details on the DLm1000, DLm2000, and DLm6000 products and how they emulate tape drives for mainframe environments.
2. De-duplication and data compression are highlighted as key benefits of EMC's tape-on-disk solutions. These techniques help reduce storage needs and allow for faster replication times.
3. Configuring the virtual tape drives in mainframe operating systems can be done in several ways, including defining them as a manual tape library or real tape devices. EMC provides utilities to help manage the virtual tape environment.
The document discusses 3D memory technologies that can provide alternatives to traditional scaling approaches. It proposes using a shared lithography approach where the same lithography steps are used across multiple memory layers to reduce costs. This approach is already being used successfully in 3D NAND flash memory. The document explores how resistive RAM (RRAM) could potentially be used to build a 3D cross-point memory or 3D 1T-1R memory with shared lithography steps to provide a lower-cost memory solution between DRAM and NAND flash in the memory hierarchy. Significant research is still needed to develop an RRAM-based 3D memory that can meet requirements for endurance, latency, and retention time.
The document discusses different types of RAM and ROM. It describes SRAM, DRAM, SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, DDR4 SDRAM, PROM, EPROM, and EEPROM. It provides details on their characteristics such as speed, cost, refresh requirements, packaging, and compatibility. The document also gives tips for selecting memory such as checking the maximum supported size and speed, and ensuring it matches the system board configuration.
This document provides an overview of different types of computer memory, including RAM and ROM. It distinguishes between common RAM types like SDRAM, DDR, DDR2 and DDR3. It also describes different memory packaging standards including SIMMs, DIMMs and RIMMs. Key characteristics of memory like clock speed, data transfer rate and error checking are defined.
This presentation discusses Dynamic RAM (DRAM) and its types. It begins by explaining what RAM is and how it provides faster access for the CPU than the hard disk. It then covers that DRAM is the main memory in computers and must be refreshed periodically to prevent data loss. The main types of DRAM discussed are SDRAM, DDR, RDRAM, and DRAM memory modules. Specific details are provided about the features and operation of each DRAM type. Major memory manufacturers are also listed.
About TrueTime, Spanner, Clock synchronization, CAP theorem, Two-phase lockin... - Subhajit Sahu
TrueTime is a service that enables the use of globally synchronized clocks, with bounded error. It returns a time interval that is guaranteed to contain the clock’s actual time for some time during the call’s execution. If two intervals do not overlap, then we know calls were definitely ordered in real time. In general, synchronized clocks can be used to avoid communication in a distributed system.
The underlying source of time is a combination of GPS receivers and atomic clocks. As there are “time masters” in every datacenter (redundantly), it is likely that both sides of a partition would continue to enjoy accurate time. Individual nodes however need network connectivity to the masters, and without it their clocks will drift. Thus, during a partition their intervals slowly grow wider over time, based on bounds on the rate of local clock drift. Operations depending on TrueTime, such as Paxos leader election or transaction commits, thus have to wait a little longer, but the operation still completes (assuming the 2PC and quorum communication are working).
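The ordering guarantee above can be sketched directly: the clock returns an interval [earliest, latest] guaranteed to contain true time, and two events are definitely ordered only when their intervals do not overlap. A minimal sketch (names are illustrative, not Spanner's actual API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TTInterval:
    earliest: float  # lower bound on true time
    latest: float    # upper bound on true time

def definitely_before(a: TTInterval, b: TTInterval) -> bool:
    # Ordered in real time only if the uncertainty intervals do not overlap.
    return a.latest < b.earliest

# Commit wait: after choosing commit timestamp s, a node waits until
# now.earliest > s, so every later observer's interval lies entirely after s.
def commit_wait_done(s: float, now: TTInterval) -> bool:
    return now.earliest > s
```

During a partition, the intervals widen with local clock drift, so `definitely_before` succeeds less often and `commit_wait_done` takes longer to become true, but both remain correct.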
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ... - Subhajit Sahu
Abstract — Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components, and processes them in topological order, one level at a time. This enables calculation of ranks in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It however comes with a precondition of the absence of dead ends in the input graph. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. Slowdown on the GPU is likely caused by a large submission of small workloads, and is expected to be a non-issue when the computation is performed on massive graphs.
Adjusting Bitset for graph : SHORT REPORT / NOTES - Subhajit Sahu
Compressed Sparse Row (CSR) is an adjacency-list based graph representation that is commonly used for efficient graph computations. Unfortunately, using CSR for dynamic graphs is impractical since addition/deletion of a single edge can require on average (N+M)/2 memory accesses, in order to update source-offsets and destination-indices. A common approach is therefore to store edge-lists/destination-indices as an array of arrays, where each edge-list is an array belonging to a vertex. While this is good enough for small graphs, it quickly becomes a bottleneck for large graphs. What causes this bottleneck depends on whether the edge-lists are sorted or unsorted. If they are sorted, checking for an edge requires about log(E) memory accesses, but adding an edge on average requires E/2 accesses, where E is the number of edges of a given vertex. Note that both addition and deletion of edges in a dynamic graph require checking for an existing edge, before adding or deleting it. If edge lists are unsorted, checking for an edge requires around E/2 memory accesses, but adding an edge requires only 1 memory access.
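For concreteness, the ~log(E) membership check on sorted CSR edge-lists looks like this (a minimal sketch of the representation described above, for a small 4-vertex graph):

```python
import bisect

# CSR: vertex v's neighbors are indices[offsets[v] : offsets[v+1]],
# kept sorted within each per-vertex slice.
offsets = [0, 2, 4, 5, 6]
indices = [1, 2, 0, 3, 0, 1]

def has_edge(u: int, v: int) -> bool:
    lo, hi = offsets[u], offsets[u + 1]
    i = bisect.bisect_left(indices, v, lo, hi)  # ~log(E) accesses per check
    return i < hi and indices[i] == v
```

Updating this structure is the expensive part: inserting one edge shifts every destination-index after the insertion point and bumps all later source-offsets, which is the (N+M)/2 average cost noted above.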
Techniques to optimize the pagerank algorithm usually fall in two categories. One is to try reducing the work per iteration, and the other is to try reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, with the same in-links, helps reduce duplicate computations and thus could help reduce iteration time. Road networks often have chains which can be short-circuited before pagerank computation to improve performance. Final ranks of chain nodes can be easily calculated. This could reduce both the iteration time, and the number of iterations. If a graph has no dangling nodes, pagerank of each strongly connected component can be computed in topological order. This could help reduce the iteration time, no. of iterations, and also enable multi-iteration concurrency in pagerank computation. The combination of all of the above methods is the STICD algorithm. [sticd] For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
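As a baseline for the optimizations listed above, a plain (monolithic) power-iteration PageRank, which processes every vertex in every iteration, can be sketched as follows (assuming no dangling vertices, as in the STICD setting):

```python
def pagerank(out_edges, n, d=0.85, tol=1e-10, max_iter=100):
    """out_edges: {u: [v, ...]}; every vertex is assumed to have out-links."""
    ranks = [1.0 / n] * n
    for _ in range(max_iter):
        nxt = [(1.0 - d) / n] * n        # teleport contribution
        for u, outs in out_edges.items():
            share = d * ranks[u] / len(outs)
            for v in outs:
                nxt[v] += share          # push rank along out-edges
        if sum(abs(a - b) for a, b in zip(nxt, ranks)) < tol:
            return nxt                   # L1 convergence check
        ranks = nxt
    return ranks
```

Techniques like skipping already-converged vertices, short-circuiting chains, or processing strongly connected components in topological order all prune work out of exactly this loop.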
Adjusting primitives for graph : SHORT REPORT / NOTES - Subhajit Sahu
Graph algorithms, like PageRank, commonly operate on Compressed Sparse Row (CSR), an adjacency-list based graph representation.
Experiments with Primitive operations : SHORT REPORT / NOTES - Subhajit Sahu
This includes:
- Multiply with different modes (map)
1. Performance of sequential execution based vs OpenMP based vector multiply.
2. Comparing various launch configs for CUDA based vector multiply.
- Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
- Sum with different modes (reduce)
1. Performance of sequential execution based vs OpenMP based vector element sum.
2. Performance of memcpy vs in-place based CUDA based vector element sum.
3. Comparing various launch configs for CUDA based vector element sum (memcpy).
4. Comparing various launch configs for CUDA based vector element sum (in-place).
- Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA based vector element sum (in-place).
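The float-vs-bfloat16 comparison above hinges on accumulation error. bfloat16 keeps float32's 8-bit exponent but only a 7-bit mantissa, which can be simulated in pure Python by truncating the low 16 bits of a float32 pattern (a rough sketch; real bfloat16 hardware typically rounds rather than truncates):

```python
import struct

def as_bfloat16(x: float) -> float:
    # Keep only the top 16 bits of the float32 pattern: sign, 8-bit
    # exponent, 7-bit mantissa -- i.e. bfloat16 precision in a float.
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

def sum_bf16(xs):
    acc = 0.0
    for x in xs:
        acc = as_bfloat16(acc + as_bfloat16(x))
    return acc

values = [0.0001] * 10_000
exact = sum(values)        # ~1.0 in float64
coarse = sum_bf16(values)  # stalls once the addend drops below 1 ulp of acc
```

Once the accumulator is large enough that 0.0001 is smaller than its bfloat16 spacing, further additions are lost entirely; this is why reductions over low-precision storage are usually done with a higher-precision accumulator.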
Adjusting OpenMP PageRank : SHORT REPORT / NOTES - Subhajit Sahu
For massive graphs that fit in RAM, but not in GPU memory, it is possible to take advantage of a shared memory system with multiple CPUs, each with multiple cores, to accelerate pagerank computation. If the NUMA architecture of the system is properly taken into account with good vertex partitioning, the speedup can be significant. To take steps in this direction, experiments are conducted to implement pagerank in OpenMP using two different approaches, uniform and hybrid. The uniform approach runs all primitives required for pagerank in OpenMP mode (with multiple threads). On the other hand, the hybrid approach runs certain primitives in sequential mode (i.e., sumAt, multiply).
word2vec, node2vec, graph2vec, X2vec: Towards a Theory of Vector Embeddings o... - Subhajit Sahu
Below are the important points I note from the 2020 paper by Martin Grohe:
- 1-WL distinguishes almost all graphs, in a probabilistic sense
- Classical WL is the two-dimensional Weisfeiler-Leman algorithm
- DeepWL is an unbounded version of WL that runs in polynomial time
- Knowledge graphs are essentially graphs with vertex/edge attributes
ABSTRACT:
Vector representations of graphs and relational structures, whether handcrafted feature vectors or learned representations, enable us to apply standard data analysis and machine learning techniques to the structures. A wide range of methods for generating such embeddings have been studied in the machine learning and knowledge representation literature. However, vector embeddings have received relatively little attention from a theoretical point of view.
Starting with a survey of embedding techniques that have been used in practice, in this paper we propose two theoretical approaches that we see as central for understanding the foundations of vector embeddings. We draw connections between the various approaches and suggest directions for future research.
DyGraph: A Dynamic Graph Generator and Benchmark Suite : NOTES - Subhajit Sahu
https://gist.github.com/wolfram77/54c4a14d9ea547183c6c7b3518bf9cd1
There exist a number of dynamic graph generators. The Barabási-Albert model iteratively attaches new vertices to pre-existing vertices in the graph using preferential attachment (edges to high-degree vertices are more likely - rich get richer - Pareto principle). However, graph size increases monotonically, and the density of the graph keeps increasing (sparsity decreasing).
Gorke's model uses a defined clustering to uniformly add vertices and edges. Purohit's model uses motifs (e.g. triangles) to mimic properties of existing dynamic graphs, such as growth rate, structure, and degree distribution. Kronecker graph generators are used to increase the size of a given graph, with a power-law distribution.
To generate dynamic graphs, we must choose a metric to compare two graphs. Common metrics include diameter, clustering coefficient (modularity?), triangle counting (triangle density?), and degree distribution.
In this paper, the authors propose Dygraph, a dynamic graph generator that uses degree distribution as the only metric. The authors observe that many real-world graphs differ from the power-law distribution at the tail end. To address this issue, they propose binning, where the vertices beyond a certain degree (minDeg = min(deg) s.t. |V(deg)| < H, where H~10 is the number of vertices with a given degree below which are binned) are grouped into bins of degree-width binWidth, max-degree localMax, and number of degrees in bin with at least one vertex binSize (to keep track of sparsity). This helps the authors to generate graphs with a more realistic degree distribution.
The process of generating a dynamic graph is as follows. First the difference between the desired and the current degree distribution is calculated. The authors then create an edge-addition set where each vertex is present as many times as the number of additional incident edges it must receive. Edges are then created by connecting two vertices randomly from this set, and removing both from the set once connected. Currently, the authors reject self-loops and duplicate edges. Removal of edges is done in a similar fashion.
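The edge-addition step can be sketched as follows. This is a simplified model of the procedure just described, with details of my own invention where the paper is not specific (e.g. rejected endpoints are simply dropped here rather than re-drawn):

```python
import random

def add_edges(existing, degree_delta, seed=7):
    """existing: set of (u, v) edges; degree_delta: vertex -> extra edges needed."""
    rng = random.Random(seed)
    # Edge-addition multiset: each vertex appears once per incident edge it needs.
    pool = [v for v, need in degree_delta.items() for _ in range(need)]
    rng.shuffle(pool)
    added = set()
    while len(pool) >= 2:
        u, v = pool.pop(), pool.pop()
        e = (min(u, v), max(u, v))
        # Reject self-loops and duplicate edges, as the paper does.
        if u != v and e not in existing and e not in added:
            added.add(e)
    return added
```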
Authors observe that adding edges with power-law properties dominates the execution time, and consider parallelizing DyGraph as part of future work.
My notes on shared memory parallelism.
Shared memory is memory that may be simultaneously accessed by multiple programs with an intent to provide communication among them or avoid redundant copies. Shared memory is an efficient means of passing data between programs. Using memory for communication inside a single program, e.g. among its multiple threads, is also referred to as shared memory [REF].
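In Python, this idea is exposed through the standard-library `multiprocessing.shared_memory` module (3.8+): one process creates a named block, another attaches to it by name and sees writes without any copying. A minimal sketch, with the second "process" simulated in the same interpreter:

```python
from multiprocessing import shared_memory

# Process A: create a named shared block and write into it.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# Process B (simulated here): attach by name and read -- no data is
# copied; both views alias the same underlying memory pages.
other = shared_memory.SharedMemory(name=shm.name)
message = bytes(other.buf[:5])

other.close()
shm.close()
shm.unlink()  # free the block once every process has detached
print(message)
```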
A Dynamic Algorithm for Local Community Detection in Graphs : NOTES - Subhajit Sahu
**Community detection methods** can be *global* or *local*. **Global community detection methods** divide the entire graph into groups. Existing global algorithms include:
- Random walk methods
- Spectral partitioning
- Label propagation
- Greedy agglomerative and divisive algorithms
- Clique percolation
https://gist.github.com/wolfram77/b4316609265b5b9f88027bbc491f80b6
There is a growing body of work in *detecting overlapping communities*. **Seed set expansion** is a **local community detection method** where relevant *seed vertices* of interest are picked and *expanded to form communities* surrounding them. The quality of each community is measured using a *fitness function*.
**Modularity** is a *fitness function* which compares the number of intra-community edges to the expected number in a random-null model. **Conductance** is another popular fitness score that measures the community cut or inter-community edges. Many *overlapping community detection* methods **use a modified ratio** of intra-community edges to all edges with at least one endpoint in the community.
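Modularity as described can be computed directly as Q = Σ_c [ L_c/m − (D_c/2m)² ], where L_c is the number of intra-community edges and D_c the total degree of community c. A minimal sketch for unweighted, undirected graphs:

```python
def modularity(edges, community):
    """edges: list of undirected (u, v); community: vertex -> community id."""
    m = len(edges)
    intra = {}   # L_c: edges with both endpoints inside community c
    degree = {}  # D_c: summed degree of community c
    for u, v in edges:
        degree[community[u]] = degree.get(community[u], 0) + 1
        degree[community[v]] = degree.get(community[v], 0) + 1
        if community[u] == community[v]:
            intra[community[u]] = intra.get(community[u], 0) + 1
    return sum(intra.get(c, 0) / m - (d / (2 * m)) ** 2
               for c, d in degree.items())
```

On two triangles joined by a single bridge edge, the natural two-community split scores Q = 5/14 ≈ 0.357.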
Andersen et al. use a **Spectral PageRank-Nibble method** which minimizes conductance and is formed by adding vertices in order of decreasing PageRank values. Andersen and Lang develop a **random walk approach** in which some vertices in the seed set may not be placed in the final community. Clauset gives a **greedy method** that *starts from a single vertex* and then iteratively adds neighboring vertices *maximizing the local modularity score*. Riedy et al. **expand multiple vertices** via maximizing modularity.
Several algorithms for **detecting global, overlapping communities** use a *greedy*, *agglomerative approach* and run *multiple separate seed set expansions*. Lancichinetti et al. run **greedy seed set expansions**, each with a *single seed vertex*. Overlapping communities are produced by a sequentially running expansions from a node not yet in a community. Lee et al. use **maximal cliques as seed sets**. Havemann et al. **greedily expand cliques**.
The authors of this paper discuss a dynamic approach for **community detection using seed set expansion**. Simply marking the neighbours of changed vertices is a **naive approach**, and has *severe shortcomings*. This is because *communities can split apart*. The simple updating method *may fail even when it outputs a valid community* in the graph.
Scalable Static and Dynamic Community Detection Using Grappolo : NOTES - Subhajit Sahu
A **community** (in a network) is a subset of nodes which are _strongly connected among themselves_, but _weakly connected to others_. Neither the number of output communities nor their size distribution is known a priori. Community detection methods can be divisive or agglomerative. **Divisive methods** use _betweenness centrality_ to **identify and remove bridges** between communities. **Agglomerative methods** greedily **merge two communities** that provide maximum gain in _modularity_. Newman and Girvan have introduced the **modularity metric**. The problem of community detection is then reduced to the problem of modularity maximization, which is **NP-complete**. The **Louvain method** is a variant of the _agglomerative strategy_, in that it is a _multi-level heuristic_.
https://gist.github.com/wolfram77/917a1a4a429e89a0f2a1911cea56314d
In this paper, the authors discuss **four heuristics** for Community detection using the _Louvain algorithm_ implemented upon recently developed **Grappolo**, which is a parallel variant of the Louvain algorithm. They are:
- Vertex following and Minimum label
- Data caching
- Graph coloring
- Threshold scaling
With the **Vertex following** heuristic, the _input is preprocessed_ and all single-degree vertices are merged with their corresponding neighbours. This helps reduce the number of vertices considered in each iteration, and also helps initial seeds of communities to be formed. With the **Minimum label heuristic**, when a vertex is making the decision to move to a community and multiple communities provide the same modularity gain, the community with the smallest id is chosen. This helps _minimize or prevent community swaps_. With the **Data caching** heuristic, community information is stored in a vector instead of a map, and is reused in each iteration, but with some additional cost. With the **Vertex ordering via Graph coloring** heuristic, _distance-k coloring_ of graphs is performed in order to group vertices into colors. Then, each set of vertices (by color) is processed _concurrently_, and synchronization is performed after that. This enables us to mimic the behaviour of the serial algorithm. Finally, with the **Threshold scaling** heuristic, _successively smaller values of modularity threshold_ are used as the algorithm progresses. This allows the algorithm to converge faster, and it has been observed to achieve a good modularity score as well.
From the results, it appears that _graph coloring_ and _threshold scaling_ heuristics do not always provide a speedup and this depends upon the nature of the graph. It would be interesting to compare the heuristics against baseline approaches. Future work can include _distributed memory implementations_, and _community detection on streaming graphs_.
Application Areas of Community Detection: A Review : NOTES - Subhajit Sahu
This is a short review of community detection methods (on graphs), and their applications. A **community** is a subset of a network whose members are *highly connected*, but *loosely connected* to others outside their community. Different community detection methods *can return differing communities* because these algorithms are **heuristic-based**. **Dynamic community detection** involves tracking the *evolution of community structure* over time.
https://gist.github.com/wolfram77/09e64d6ba3ef080db5558feb2d32fdc0
Communities can be of the following **types**:
- Disjoint
- Overlapping
- Hierarchical
- Local
The following **static** community detection **methods** exist:
- Spectral-based
- Statistical inference
- Optimization
- Dynamics-based
The following **dynamic** community detection **methods** exist:
- Independent community detection and matching
- Dependent community detection (evolutionary)
- Simultaneous community detection on all snapshots
- Dynamic community detection on temporal networks
**Applications** of community detection include:
- Criminal identification
- Fraud detection
- Criminal activities detection
- Bot detection
- Dynamics of epidemic spreading (dynamic)
- Cancer/tumor detection
- Tissue/organ detection
- Evolution of influence (dynamic)
- Astroturfing
- Customer segmentation
- Recommendation systems
- Social network analysis (both)
- Network summarization
- Privacy, group segmentation
- Link prediction (both)
- Community evolution prediction (dynamic, hot field)
## References
- [Application Areas of Community Detection: A Review : PAPER](https://ieeexplore.ieee.org/document/8625349)
This paper discusses a GPU implementation of the Louvain community detection algorithm. The Louvain algorithm obtains hierarchical communities as a dendrogram through modularity optimization. Given an undirected weighted graph, all vertices are first considered to be their own communities. In the first phase, each vertex greedily decides to move to the community of one of its neighbours which gives the greatest increase in modularity. If moving to no neighbour's community leads to an increase in modularity, the vertex chooses to stay with its own community. This is done sequentially for all the vertices. If the total change in modularity is more than a certain threshold, this phase is repeated. Once this local moving phase is complete, all vertices have formed their first hierarchy of communities. The next phase is called the aggregation phase, where all the vertices belonging to a community are collapsed into a single super-vertex, such that edges between communities are represented as edges between respective super-vertices (edge weights are combined), and edges within each community are represented as self-loops in respective super-vertices (again, edge weights are combined). Together, the local moving and the aggregation phases constitute a stage. This super-vertex graph is then used as input for the next stage. This process continues until the increase in modularity is below a certain threshold. As a result from each stage, we have a hierarchy of community memberships for each vertex as a dendrogram.
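One local-moving pass of the kind described above can be sketched by brute force, re-scoring the whole partition for every candidate move. Real implementations use an incremental modularity-gain formula instead; this illustrative version trades efficiency for clarity:

```python
def modularity(edges, comm):
    # Q = sum over communities c of [ L_c/m - (D_c/2m)^2 ]
    m = len(edges)
    intra, deg = {}, {}
    for u, v in edges:
        deg[comm[u]] = deg.get(comm[u], 0) + 1
        deg[comm[v]] = deg.get(comm[v], 0) + 1
        if comm[u] == comm[v]:
            intra[comm[u]] = intra.get(comm[u], 0) + 1
    return sum(intra.get(c, 0) / m - (d / (2 * m)) ** 2 for c, d in deg.items())

def local_moving_pass(edges, vertices):
    comm = {v: v for v in vertices}       # start: every vertex is its own community
    neighbors = {v: set() for v in vertices}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    for v in vertices:                    # sequential greedy sweep
        best, best_q = comm[v], modularity(edges, comm)
        for n in neighbors[v]:
            trial = {**comm, v: comm[n]}  # try moving v into n's community
            q = modularity(edges, trial)
            if q > best_q:
                best, best_q = comm[n], q
        comm[v] = best                    # stay put if no move improves Q
    return comm
```

On two triangles joined by a bridge, one pass already improves modularity and begins grouping each triangle; repeating the phase and then aggregating finishes the job.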
Approaches to perform the Louvain algorithm can be divided into coarse-grained and fine-grained. Coarse-grained approaches process a set of vertices in parallel, while fine-grained approaches process all vertices in parallel. A coarse-grained hybrid-GPU algorithm using multiple GPUs has been implemented by Cheong et al., which grabbed my attention. In addition, their algorithm does not use hashing for the local moving phase, but instead sorts each neighbour list based on the community id of each vertex.
https://gist.github.com/wolfram77/7e72c9b8c18c18ab908ae76262099329
Survey for extra-child-process package : NOTES - Subhajit Sahu
Useful additions to inbuilt child_process module.
📦 Node.js, 📜 Files, 📰 Docs.
Please see attached PDF for literature survey.
https://gist.github.com/wolfram77/d936da570d7bf73f95d1513d4368573e
Dynamic Batch Parallel Algorithms for Updating PageRank : POSTER - Subhajit Sahu
This paper presents two algorithms for efficiently computing PageRank on dynamically updating graphs in a batched manner: DynamicLevelwisePR and DynamicMonolithicPR. DynamicLevelwisePR processes vertices level-by-level based on strongly connected components and avoids recomputing converged vertices on the CPU. DynamicMonolithicPR uses a full power iteration approach on the GPU that partitions vertices by in-degree and skips unaffected vertices. Evaluation on real-world graphs shows the batched algorithms provide speedups of up to 4000x over single-edge updates and outperform other state-of-the-art dynamic PageRank algorithms.
Abstract for IPDPS 2022 PhD Forum on Dynamic Batch Parallel Algorithms for Up... - Subhajit Sahu
For the PhD forum an abstract submission is required by 10th May, and poster by 15th May. The event is on 30th May.
https://gist.github.com/wolfram77/1c1f730d20b51e0d2c6d477fd3713024
Fast Incremental Community Detection on Dynamic Graphs : NOTES - Subhajit Sahu
In this paper, the authors describe two approaches for dynamic community detection using the CNM algorithm. CNM is a hierarchical, agglomerative algorithm that greedily maximizes modularity. They define two approaches: BasicDyn and FastDyn. BasicDyn backtracks merges of communities until each marked (changed) vertex is its own singleton community. FastDyn undoes a merge only if the quality of merge, as measured by the induced change in modularity, has significantly decreased compared to when the merge initially took place. FastDyn also allows more than two vertices to contract together if in the previous time step these vertices eventually ended up contracted in the same community. In the static case, merging several vertices together in one contraction phase could lead to deteriorating results. FastDyn is able to do this, however, because it uses information from the merges of the previous time step. Intuitively, merges that previously occurred are more likely to be acceptable later.
https://gist.github.com/wolfram77/1856b108334cc822cdddfdfa7334792a
02/10/2020 DDR3 SDRAM - Wikipedia
https://en.wikipedia.org/wiki/DDR3_SDRAM
DDR3 SDRAM
[Image: 4 GiB PC3-12800 ECC DDR3 DIMM]
Type: Synchronous dynamic random-access memory (SDRAM)
Release date: 2007
Predecessor: DDR2 SDRAM (2003)
Successor: DDR4 SDRAM (2014)
Double Data Rate 3 Synchronous Dynamic Random-Access
Memory, officially abbreviated as DDR3 SDRAM, is a type of
synchronous dynamic random-access memory (SDRAM) with a
high bandwidth ("double data rate") interface, and has been in use
since 2007. It is the higher-speed successor to DDR and DDR2
and predecessor to DDR4 synchronous dynamic random-access
memory (SDRAM) chips. DDR3 SDRAM is neither forward nor
backward compatible with any earlier type of random-access
memory (RAM) because of different signaling voltages, timings,
and other factors.
DDR3 is a DRAM interface specification. The actual DRAM
arrays that store the data are similar to earlier types, with similar
performance. The primary benefit of DDR3 SDRAM over its
immediate predecessor, DDR2 SDRAM, is its ability to transfer
data at twice the rate (eight times the speed of its internal memory
arrays), enabling higher bandwidth or peak data rates.
The DDR3 standard permits DRAM chip capacities of up to 8 gibibits (Gibit), and up to four ranks of 64 bits each for
a total maximum of 16 gibibytes (GiB) per DDR3 DIMM. Because of a hardware limitation not fixed until Ivy
Bridge-E in 2013, most older Intel CPUs only support up to 4-Gibit chips for 8 GiB DIMMs (Intel's Core 2 DDR3
chipsets only support up to 2 Gibit). All AMD CPUs correctly support the full specification for 16 GiB DDR3
DIMMs.[1]
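The capacity arithmetic above can be illustrated with a short sketch (my own illustration, not from the article; it assumes the common case of eight x8 chips per 64-bit rank, and the function name is made up):

```python
def dimm_capacity_gib(chip_gibit: int, ranks: int, chip_width_bits: int = 8) -> float:
    """Capacity of a DDR3 DIMM built from `ranks` ranks of 64 data bits,
    each rank populated with chips of `chip_gibit` gibibits density."""
    chips_per_rank = 64 // chip_width_bits      # 8 chips for common x8 parts
    rank_gib = chips_per_rank * chip_gibit / 8  # gibibits -> gibibytes
    return ranks * rank_gib

print(dimm_capacity_gib(8, 2))  # 16.0 GiB: 8-Gibit chips, two ranks (full DDR3 spec)
print(dimm_capacity_gib(4, 2))  # 8.0 GiB: the 4-Gibit-chip limit of older Intel CPUs
```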
Contents
History
Successor
Specification
Overview
Dual-inline memory modules
Latencies
Power consumption
Modules
Serial presence detect
Release 4
XMP extension
Variants
DDR3L and DDR3U extensions
Feature summary
Components
Modules
Technological advantages over DDR2
See also
Notes
References
External links
History
In February 2005, Samsung introduced the first prototype DDR3 memory chip. Samsung played a major role in the
development and standardisation of DDR3.[2][3] In May 2005, Desi Rhoden, chairman of the JEDEC committee,
stated that DDR3 had been under development for "about 3 years".[4]
DDR3 was officially launched in 2007, but sales were not expected to overtake DDR2 until the end of 2009, or
possibly early 2010, according to Intel strategist Carlos Weissenberg, speaking during the early part of their roll-out
in August 2008.[5] (The same timescale for market penetration had been stated by market intelligence company
DRAMeXchange over a year earlier in April 2007,[6] and by Desi Rhoden in 2005.[4]) The primary driving force
behind the increased usage of DDR3 has been new Core i7 processors from Intel and Phenom II processors from
AMD, both of which have internal memory controllers: the former requires DDR3, the latter recommends it. IDC
stated in January 2009 that DDR3 sales would account for 29% of the total DRAM units sold in 2009, rising to 72%
by 2011.[7]
Successor
In September 2012, JEDEC released the final specification of DDR4.[8] The primary benefits of DDR4 compared to
DDR3 include a higher standardized range of clock frequencies and data transfer rates[9] and significantly lower
voltage.
Specification
Overview
Compared to DDR2 memory, DDR3 memory uses less power. Some manufacturers further propose using "dual-gate"
transistors to reduce leakage of current.[10]
According to JEDEC,[11]:111 1.575 volts should be considered the absolute maximum when memory stability is the
foremost consideration, such as in servers or other mission-critical devices. In addition, JEDEC states that memory
modules must withstand up to 1.80 volts[a] before incurring permanent damage, although they are not required to
function correctly at that level.[11]:109
Another benefit is its prefetch buffer, which is 8-burst-deep. In contrast, the prefetch buffer of DDR2 is 4-burst-deep,
and the prefetch buffer of DDR is 2-burst-deep. This advantage is an enabling technology in DDR3's transfer speed.
DDR3 modules can transfer data at a rate of 800–2133 MT/s using both rising and falling edges of a 400–1066 MHz
I/O clock. This is twice DDR2's data transfer rates (400–1066 MT/s using a 200–533 MHz I/O clock) and four times
the rate of DDR (200–400 MT/s using a 100–200 MHz I/O clock). High-performance graphics was an initial driver of
such bandwidth requirements, where high bandwidth data transfer between framebuffers is required.
Because the hertz is a measure of cycles per second, and no signal cycles more often than every other transfer,
describing the transfer rate in units of MHz is technically incorrect, although very common. It is also misleading
because various memory timings are given in units of clock cycles, which are half the speed of data transfers.
DDR3 does use the same electric signaling standard as DDR and DDR2, Stub Series Terminated Logic, albeit at
different timings and voltages. Specifically, DDR3 uses SSTL_15.[13]
In February 2005, Samsung demonstrated the first DDR3 memory prototype, with a capacity of 512 Mb and a
bandwidth of 1.066 Gbps.[2] Products in the form of motherboards appeared on the market in June 2007[14] based on
Intel's P35 "Bearlake" chipset with DIMMs at bandwidths up to DDR3-1600 (PC3-12800).[15] The Intel Core i7,
released in November 2008, connects directly to memory rather than via a chipset. The Core i7, i5 & i3 CPUs
initially supported only DDR3. AMD's socket AM3
Phenom II X4 processors, released in February 2009,
were their first to support DDR3 (while still supporting
DDR2 for backwards compatibility).
[Image: physical comparison of DDR, DDR2, and DDR3 SDRAM; desktop PCs (DIMM) and notebook/convertible PCs (SO-DIMM)]
Dual-inline memory modules
DDR3 dual-inline memory modules (DIMMs) have 240
pins and are electrically incompatible with DDR2. A key
notch—located differently in DDR2 and DDR3 DIMMs
—prevents accidentally interchanging them. Not only are
they keyed differently, but DDR2 has rounded notches on
the side and the DDR3 modules have square notches on
the side.[16] DDR3 SO-DIMMs have 204 pins.[17]
For the Skylake microarchitecture, Intel has also designed
a SO-DIMM package named UniDIMM, which can use
either DDR3 or DDR4 chips. The CPU's integrated
memory controller can then work with either. The purpose
of UniDIMMs is to handle the transition from DDR3 to
DDR4, where pricing and availability may make it
desirable to switch RAM type. UniDIMMs have the same
dimensions and number of pins as regular DDR4 SO-
DIMMs, but the notch is placed differently to avoid
accidental use in an incompatible DDR4 SO-DIMM
socket.[18]
Latencies
DDR3 latencies are numerically higher because the I/O
bus clock cycles by which they are measured are shorter;
the actual time interval is similar to DDR2 latencies,
around 10 ns. There is some improvement because DDR3
generally uses more recent manufacturing processes, but
this is not directly caused by the change to DDR3.
CAS latency (ns) = 1000 × CL (cycles) ÷ clock frequency (MHz) = 2000 × CL (cycles) ÷ transfer rate (MT/s)
While the typical latencies for a JEDEC DDR2-800 device were 5-5-5-15 (12.5 ns), some standard latencies for
JEDEC DDR3 devices include 7-7-7-20 for DDR3-1066 (13.125 ns) and 8-8-8-24 for DDR3-1333 (12 ns).
As with earlier memory generations, faster DDR3 memory became available after the release of the initial versions.
DDR3-2000 memory with 9-9-9-28 latency (9 ns) was available in time to coincide with the Intel Core i7 release in
late 2008,[19] while later developments made DDR3-2400 widely available (with CL 9–12 cycles = 7.5–10 ns), and
speeds up to DDR3-3200 available (with CL 13 cycles = 8.125 ns).
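The CAS-latency formula above is easy to sanity-check against the quoted timings (a minimal sketch; the function name is mine):

```python
def cas_latency_ns(cl_cycles: int, transfer_rate_mts: float) -> float:
    # CAS latency (ns) = 2000 x CL / transfer rate (MT/s), as given above;
    # the factor 2000 folds together 1000 (for MHz -> ns) and the double data rate.
    return 2000 * cl_cycles / transfer_rate_mts

print(cas_latency_ns(5, 800))                # 12.5  -- JEDEC DDR2-800 at CL5
print(round(cas_latency_ns(8, 1333.33), 2))  # 12.0  -- DDR3-1333 at CL8
print(round(cas_latency_ns(13, 3200), 3))    # 8.125 -- DDR3-3200 at CL13
```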
Power consumption
Power consumption of individual SDRAM chips (or, by extension, DIMMs) varies based on many factors, including
speed, type of usage, voltage, etc. Dell's Power Advisor calculates that 4 GB ECC DDR1333 RDIMMs use about
4 W each.[20] By contrast, a more modern mainstream desktop-oriented 8 GB DDR3-1600 DIMM is rated at 2.58 W,
despite being significantly faster.[21]
Modules

Standard    Chip  Module     Memory clock  Cycle time  I/O clock  Transfer rate  Bandwidth  Timings        CAS latency
            type             (MHz)         (ns)[22]    (MHz)      (MT/s)         (MB/s)     (CL-tRCD-tRP)  (ns)
DDR3-800    D     PC3-6400   100           10          400        800            6400       5-5-5          12.5
            E                                                                               6-6-6          15
DDR3-1066   E     PC3-8500   133⅓          7.5         533⅓       1066⅔          8533⅓      6-6-6          11.25
            F                                                                               7-7-7          13.125
            G                                                                               8-8-8          15
DDR3-1333   F*    PC3-10600  166⅔          6           666⅔       1333⅓          10666⅔     7-7-7          10.5
            G                                                                               8-8-8          12
            H                                                                               9-9-9          13.5
            J*                                                                              10-10-10       15
DDR3-1600   G*    PC3-12800  200           5           800        1600           12800      8-8-8          10
            H                                                                               9-9-9          11.25
            J                                                                               10-10-10       12.5
            K                                                                               11-11-11       13.75
DDR3-1866   J*    PC3-14900  233⅓          4.286       933⅓       1866⅔          14933⅓     10-10-10       10.714
            K                                                                               11-11-11       11.786
            L                                                                               12-12-12       12.857
            M*                                                                              13-13-13       13.929
DDR3-2133   K*    PC3-17000  266⅔          3.75        1066⅔      2133⅓          17066⅔     11-11-11       10.313
            L                                                                               12-12-12       11.25
            M                                                                               13-13-13       12.188
            N*                                                                              14-14-14       13.125

* optional
DDR3-xxx denotes data transfer rate, and describes DDR chips, whereas PC3-xxxx denotes theoretical bandwidth
(with the last two digits truncated), and is used to describe assembled DIMMs. Bandwidth is calculated by taking
transfers per second and multiplying by eight. This is because DDR3 memory modules transfer data on a bus that is
64 data bits wide, and since a byte comprises 8 bits, this equates to 8 bytes of data per transfer.
With two transfers per cycle of a quadrupled clock signal, a 64-bit wide DDR3 module may achieve a transfer rate of
up to 64 times the memory clock speed. With data being transferred 64 bits at a time per memory module, DDR3
SDRAM gives a transfer rate of (memory clock rate) × 4 (for bus clock multiplier) × 2 (for data rate) × 64 (number of
bits transferred) / 8 (number of bits in a byte). Thus with a memory clock frequency of 100 MHz, DDR3 SDRAM
gives a maximum transfer rate of 6400 MB/s.
The data rate (in MT/s) is twice the I/O bus clock (in MHz) due to the double data rate of DDR memory. As
explained above, the bandwidth in MB/s is the data rate multiplied by eight.
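The bandwidth arithmetic and the PC3 naming convention described above can be sketched as follows (helper names are mine; the truncation rule follows the text, though as noted some vendors round differently):

```python
def ddr3_bandwidth_mb_s(memory_clock_mhz: float) -> float:
    # memory clock x 4 (bus clock multiplier) x 2 (double data rate)
    # x 64 bits per transfer / 8 bits per byte
    return memory_clock_mhz * 4 * 2 * 64 / 8

def module_name(transfer_rate_mts: float) -> str:
    # PC3-xxxx: peak bandwidth in MB/s with the last two digits truncated
    # (some vendors instead list e.g. PC3-10666 or PC3-10700)
    bandwidth = transfer_rate_mts * 8
    return f"PC3-{int(bandwidth // 100) * 100}"

print(ddr3_bandwidth_mb_s(100))  # 6400.0 MB/s for a 100 MHz memory clock (DDR3-800)
print(module_name(1333.33))      # PC3-10600
```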
CL – CAS Latency clock cycles, between sending a column address to the memory and the beginning of the data in
response
tRCD – Clock cycles between row activate and reads/writes
tRP – Clock cycles between row precharge and activate
Fractional frequencies are normally rounded down, but rounding up to 667 is common because the exact value is
666⅔, which rounds to 667 as the nearest whole number. Some manufacturers also round to a certain precision or
round up instead. For example, PC3-10666 memory could be listed as PC3-10600 or PC3-10700.[23]
Note: All items listed above are specified by JEDEC as JESD79-3F.[11]:157–165 All RAM data rates in between or
above these listed specifications are not standardized by JEDEC—often they are simply manufacturer optimizations
using higher-tolerance or overvolted chips. Of these non-standard specifications, the highest reported speed reached
was equivalent to DDR3-2544, as of May 2010.[24]
Alternative naming: DDR3 modules are often incorrectly labeled with the prefix PC (instead of PC3), for marketing
reasons, followed by the data-rate. Under this convention PC3-10600 is listed as PC1333.[25]
Serial presence detect
DDR3 memory utilizes serial presence detect.[26] Serial presence detect (SPD) is a standardized way to automatically
access information about a computer memory module, using a serial interface. It is typically used during the power-
on self-test for automatic configuration of memory modules.
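As a rough illustration of how firmware might decode SPD contents, here is a sketch (the byte offsets reflect my recollection of the JEDEC DDR3 SPD layout, the fine-timebase correction is omitted, and the sample bytes are fabricated; verify against the standard before relying on any of this):

```python
def decode_ddr3_spd(spd: bytes) -> dict:
    """Minimal sketch of decoding a raw DDR3 SPD EEPROM blob."""
    if spd[2] != 0x0B:              # byte 2: DRAM device type, 0x0B = DDR3
        raise ValueError("not a DDR3 module")
    mtb_ns = spd[10] / spd[11]      # bytes 10-11: medium timebase, typically 1/8 = 0.125 ns
    tck_min_ns = spd[12] * mtb_ns   # byte 12: minimum cycle time in MTB units
    data_rate = round(2000 / tck_min_ns)  # two transfers per clock
    return {"tCKmin_ns": tck_min_ns, "data_rate_MTs": data_rate}

# Fabricated 16-byte prefix for a hypothetical DDR3-1600 part (tCKmin = 1.25 ns):
sample = bytes([0x92, 0x11, 0x0B, 0x02, 0x04, 0x21, 0x00, 0x09,
                0x0B, 0x11, 0x01, 0x08, 0x0A, 0x00, 0x00, 0x00])
print(decode_ddr3_spd(sample))  # tCKmin 1.25 ns -> DDR3-1600
```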
Release 4
Release 4 of the DDR3 Serial Presence Detect (SPD) document (SPD4_01_02_11) adds support for Load Reduction
DIMMs and also for 16b-SO-DIMMs and 32b-SO-DIMMs.
JEDEC Solid State Technology Association announced the publication of Release 4 of the DDR3 Serial Presence
Detect (SPD) document on September 1, 2011.[27]
XMP extension
Intel Corporation officially introduced the eXtreme Memory Profile (XMP) Specification on March 23, 2007 to
enable enthusiast performance extensions to the traditional JEDEC SPD specifications for DDR3 SDRAM.[28]
Variants
In addition to bandwidth designations (e.g. DDR3-800D) and capacity variants, modules can be one of the following:
1. ECC memory, which has an extra data byte lane used for correcting minor errors and detecting major
errors for better reliability. Modules with ECC are identified by an additional ECC or E in their
designation. For example: "PC3-6400 ECC", or PC3-8500E.[29]
2. Registered or buffered memory, which improves signal integrity (and hence potentially clock rates and
physical slot capacity) by electrically buffering the signals with a register, at a cost of an extra clock of
increased latency. Those modules are identified by an additional R in their designation, for example
PC3-6400R.[30]
3. Non-registered (a.k.a. "unbuffered") RAM may be identified by an additional U in the designation.[30]
4. Fully buffered modules, which are designated by F or FB and do not have the same notch position as
other classes. Fully buffered modules cannot be used with motherboards that are made for registered
modules, and the different notch position physically prevents their insertion.
5. Load reduced modules, which are designated by LR and are similar to registered/buffered memory, in
a way that LRDIMM modules buffer both control and data lines while retaining the parallel nature of all
signals. As such, LRDIMM memory provides large overall maximum memory capacities, while
addressing some of the performance and power consumption issues of FB memory induced by the
required conversion between serial and parallel signal forms.
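A tiny helper can map the designation suffixes listed above back to the variant descriptions (a hypothetical convenience of mine, not any standard API; real vendor part numbers vary):

```python
# Suffix conventions as described in the text (E, R, U, F/FB, LR).
SUFFIXES = {
    "E":  "ECC (extra data byte lane for error detection/correction)",
    "R":  "registered/buffered (adds one clock of latency)",
    "U":  "unbuffered (non-registered)",
    "F":  "fully buffered (different notch position)",
    "LR": "load reduced (buffers both control and data lines)",
}

def describe(designation: str) -> str:
    # e.g. "PC3-6400R" -> registered; longest suffix first so "LR" beats "R"
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if designation.upper().endswith(suffix):
            return SUFFIXES[suffix]
    return "unknown variant"

print(describe("PC3-6400R"))
```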
Both FBDIMM (fully buffered) and LRDIMM (load reduced) memory types are designed primarily to control the
amount of electric current flowing to and from the memory chips at any given time. They are not compatible with
registered/buffered memory, and motherboards that require them usually will not accept any other kind of memory.
DDR3L and DDR3U extensions
The DDR3L (DDR3 Low Voltage) standard is an addendum to the JESD79-3 DDR3 Memory Device Standard
specifying low voltage devices.[31] The DDR3L standard is 1.35 V and has the label PC3L for its modules. Examples
include DDR3L‐800 (PC3L-6400), DDR3L‐1066 (PC3L-8500), DDR3L‐1333 (PC3L-10600), and DDR3L‐1600
(PC3L-12800). Memory specified to DDR3L and DDR3U specifications is compatible with the original DDR3
standard, and can run at either the lower voltage or at 1.50 V.[32] However, devices that require DDR3L explicitly,
which operate at 1.35 V, such as systems using mobile versions of fourth-generation Intel Core processors, are not
compatible with 1.50 V DDR3 memory.[33]
The DDR3U (DDR3 Ultra Low Voltage) standard is 1.25 V and has the label PC3U for its modules.[34]
JEDEC Solid State Technology Association announced the publication of JEDEC DDR3L on July 26, 2010[35] and
the DDR3U on October 2011.[36]
Feature summary
Components
Introduction of asynchronous RESET pin
Support of system-level flight-time compensation
On-DIMM mirror-friendly DRAM pinout
Introduction of CWL (CAS write latency) per clock bin
On-die I/O calibration engine
READ and WRITE calibration
Dynamic ODT (On-Die Termination), allowing different termination values for reads and writes
Modules
Fly-by command/address/control bus with on-DIMM termination
High-precision calibration resistors
Are not backwards compatible: DDR3 modules do not fit into DDR2 sockets; forcing them can damage the DIMM and/or the motherboard[37]
Technological advantages over DDR2
Higher bandwidth performance, up to 2133 MT/s standardized
Slightly improved latencies, as measured in nanoseconds
Higher performance at low power (longer battery life in laptops)
Enhanced low-power features
See also
List of device bandwidths
Low power DDR3 SDRAM (LPDDR3)
Multi-channel memory architecture
Notes
a. Prior to revision F, the standard stated that 1.975 V was the absolute maximum DC rating.[12]
References
1. Cutress, Ian (2014-02-11). "I'M Intelligent Memory to release 16GB Unregistered DDR3 Modules" (htt
p://www.anandtech.com/show/7742/im-intelligent-memory-to-release-16gb-unregistered-ddr3-
modules). anandtech.com. Retrieved 2015-04-20.
2. "Samsung Demonstrates World's First DDR 3 Memory Prototype" (https://phys.org/news/2005-02-sam
sung-world-ddr-memory-prototype.html). Phys.org. 17 February 2005. Retrieved 23 June 2019.
3. "Our Proud Heritage from 2000 to 2009" (https://www.samsung.com/semiconductor/about-us/history-0
3/). Samsung Semiconductor. Samsung. Retrieved 25 June 2019.
4. Sobolev, Vyacheslav (2005-05-31). "JEDEC: Memory standards on the way" (https://web.archive.org/w
eb/20130413141103/http://www.digitimes.com/news/a20050530PR201.html). DigiTimes.com. Archived
from the original (http://www.digitimes.com/news/a20050530PR201.html) on April 13, 2013. Retrieved
2011-04-28. "JEDEC is already well along in the development of the DDR3 standard, and we have
been working on it for about three years now.... Following historical models, you could reasonably
expect the same three-year transition to a new technology that you have seen for the last several
generations of standard memory"
5. "IDF: "DDR3 won't catch up with DDR2 during 2009" " (https://web.archive.org/web/20090402160556/h
ttp://www.pcpro.co.uk/news/220257/idf-ddr3-wont-catch-up-with-ddr2-during-2009.html). pcpro.co.uk.
19 August 2008. Archived from the original (http://www.pcpro.co.uk/news/220257/idf-ddr3-wont-catch-u
p-with-ddr2-during-2009.html) on 2009-04-02. Retrieved 2009-06-17.
6. Bryan, Gardiner (April 17, 2007). "DDR3 Memory Won't Be Mainstream Until 2009" (http://www.extrem
etech.com/article2/0,2845,2115031,00.asp). ExtremeTech.com. Retrieved 2009-06-17.
7. Salisbury, Andy (2009-01-20). "New 50nm Process Will Make DDR3 Faster and Cheaper This Year" (ht
tp://www.maximumpc.com/article/news/new_50nm_process_will_make_ddr3_faster_and_cheaper_this
_year). MaximumPC.com. Retrieved 2009-06-17.
8. "JEDEC Announces Publication of DDR4 Standard – JEDEC" (http://www.jedec.org/news/pressrelease
s/jedec-announces-publication-ddr4-standard). JEDEC. Retrieved 12 October 2014.
9. Shilov, Anton (August 16, 2010). "Next-Generation DDR4 Memory to Reach 4.266GHz – Report" (http
s://web.archive.org/web/20101219085440/http://www.xbitlabs.com/news/memory/display/20100816124
343_Next_Generation_DDR4_Memory_to_Reach_4_266GHz_Report.html). XbitLabs.com. Archived
from the original (http://www.xbitlabs.com/news/memory/display/20100816124343_Next_Generation_D
DR4_Memory_to_Reach_4_266GHz_Report.html) on December 19, 2010. Retrieved 2011-01-03.
10. McCloskey, Alan, Research: DDR FAQ (https://web.archive.org/web/20071112151558/http://www.ocmo
dshop.com/ocmodshop.aspx?a=868), archived from the original (http://www.ocmodshop.com/ocmodsh
op.aspx?a=868) on 2007-11-12, retrieved 2007-10-18
11. "DDR3 SDRAM standard (revision F)" (http://www.jedec.org/standards-documents/docs/jesd-79-3d).
JEDEC. July 2012. Retrieved 2015-07-05.
12. "DDR3 SDRAM standard (revision E)" (http://www.jedec.org/sites/default/files/docs/JESD79-3E.pdf)
(PDF). JEDEC. July 2010. Retrieved 2015-07-05.
13. Chang, Jaci (2004). "Design Considerations for the DDR3 Memory Sub-System" (https://web.archive.or
g/web/20120724110727/http://www.jedex.org/images/pdf/samsung%20-%20jaci_chang.pdf) (PDF).
Jedex. p. 4. Archived from the original (http://www.jedex.org/images/pdf/samsung%20-%20jaci_chang.
pdf) (PDF) on 2012-07-24. Retrieved 2020-08-12.
14. Soderstrom, Thomas (2007-06-05). "Pipe Dreams: Six P35-DDR3 Motherboards Compared" (http://ww
w.tomshardware.com/2007/06/05/pipe_dreams_six_p35-ddr3_motherboards_compared/). Tom's
Hardware.
15. Fink, Wesley (2007-07-20). "Super Talent & TEAM: DDR3-1600 Is Here!" (http://www.anandtech.com/p
rintarticle.aspx?i=3045). AnandTech.
16. "DocMemory" (2007-02-21). "Memory Module Picture 2007" (http://www.simmtester.com/page/news/sh
owpubnews.asp?title=Memory+Module+Picture+2007&num=150).
17. "204-Pin DDR3 SDRAM unbuffered SODIMM design specification" (https://www.jedec.org/standards-d
ocuments/docs/module-42018). JEDEC. May 2014. Retrieved 2015-07-05.
18. "How Intel Plans to Transition Between DDR3 and DDR4 for the Mainstream" (http://www.techpowerup.
com/205231/how-intel-plans-to-transition-between-ddr3-and-ddr4-for-the-mainstream.html).
techpowerup.com. Retrieved 19 March 2018.
19. Shilov, Anton (2008-10-29). "Kingston Rolls Out Industry's First 2GHz Memory Modules for Intel Core
i7 Platforms" (https://web.archive.org/web/20081101113745/http://www.xbitlabs.com/news/memory/disp
lay/20081029141143_Kingston_Rolls_Out_Industry_s_First_2GHz_Memory_Modules_for_Intel_Core_i
7_Platforms.html). Xbit Laboratories. Archived from the original (http://www.xbitlabs.com/news/memor
y/display/20081029141143_Kingston_Rolls_Out_Industry_s_First_2GHz_Memory_Modules_for_Intel_
Core_i7_Platforms.html) on 2008-11-01. Retrieved 2008-11-02.
20. "Dell Energy Smart Solution Advisor" (https://web.archive.org/web/20130801070649/http://essa.us.dell.
com/DellStarOnline/DCCP.aspx?c=us&l=en&s=corp&Template=6945c07e-3be7-47aa-b318-18f9052df
893). Essa.us.dell.com. Archived from the original (http://essa.us.dell.com/DellStarOnline/DCCP.aspx?
c=us&l=en&s=corp&Template=6945c07e-3be7-47aa-b318-18f9052df893) on 2013-08-01. Retrieved
2013-07-28.
21. Kingston KVR16N11/8 module data sheet (http://www.kingston.com/dataSheets/KVR16N11_8.pdf) (PDF).
22. Cycle time is the inverse of the I/O bus clock frequency; e.g., 1/(100 MHz) = 10 ns per clock cycle.
23. Pc3 10600 vs. pc3 10666 What's the difference – New-System-Build (http://www.tomshardware.com/fo
rum/274587-31-10600-10666-what-difference#t2045244), Tomshardware.com, retrieved 2012-01-23
24. Kingston's 2,544 MHz DDR3 On Show at Computex (http://news.softpedia.com/news/Kingston-s-2-544
-MHz-DDR3-On-Show-at-Computex-143379.shtml), News.softpedia.com, 2010-05-31, retrieved
2012-01-23
25. Crucial Value CT2KIT51264BA1339 PC1333 4GB Memory RAM (DDR3, CL9) Retail (https://www.ama
zon.co.uk/Crucial-CT2KIT51264BA1339-PC1333-Memory-Retail/dp/B0036VO632),
www.amazon.co.uk, 2016-05-10, retrieved 2016-05-10
26. "Understanding DDR3 Serial Presence Detect (SPD) Table" (https://www.simmtester.com/News/Public
ationArticle/153). simmtester.com. Retrieved 12 December 2015.
27. "JEDEC Announces Publication of Release 4 of the DDR3 Serial Presence Detect Specification" (htt
p://www.jedec.org/news/pressreleases/jedec-announces-publication-release-4-ddr3-serial-presence-de
tect-specification).
28. "Intel Extreme memory Profile (Intel XMP) DDR3 Technology" (http://www.intel.com/assets/pdf/whitepa
per/319124.pdf) (PDF). Retrieved 2009-05-29.
29. Memory technology evolution: an overview of system memory technologies (https://web.archive.org/we
b/20110724013507/http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00256987/c0025698
7.pdf) (PDF), Hewlett-Packard, p. 18, archived from the original (http://h20000.www2.hp.com/bc/docs/s
upport/SupportManual/c00256987/c00256987.pdf) (PDF) on 2011-07-24
30. "What is LR-DIMM, LRDIMM Memory? (Load-Reduce DIMM)" (http://www.simmtester.com/page/news/
showpubnews.asp?num=167). simmtester.com. Retrieved 2014-08-29.
31. "Addendum No. 1 to JESD79-3 - 1.35 V DDR3L-800, DDR3L-1066, DDR3L-1333, DDR3L-1600, and
DDR3L-1866" (https://www.jedec.org/standards-documents/docs/jesd79-3-1a01). May 2013. Retrieved
2019-09-08.
32. "Addendum No. 1 to JESD79-3 - 1.35 V DDR3L-800, DDR3L-1066, DDR3L-1333, DDR3L-1600, and
DDR3L-1866" (https://www.jedec.org/standards-documents/docs/jesd79-3-1a01). May 2013. Retrieved
2019-09-08. "DDR3L VDD/VDDQ requirements - Power Supply: DDR3L operation = 1.283 V to 1.45 V;
DDR3 operation = 1.425 V to 1.575 V .. Once initialized for DDR3L operation, DDR3 operation may
only be used if the device is in reset while VDD and VDDQ are changed for DDR3 operation"
33. "What is DDR3L Memory?" (http://www.dell.com/support/article/us/en/19/SLN153768/EN). Dell.com.
Dell. 2016-10-03. Retrieved 2016-10-04.
34. "Addendum No. 2 to JESD79-3, 1.25 V DDR3U-800, DDR3U-1066, DDR3U-1333, and DDR3U-1600"
(https://www.jedec.org/standards-documents/docs/jesd79-3-2). October 2011. Retrieved 2019-09-08.
35. "Specification Will Encourage Lower Power Consumption for Countless Consumer Electronics,
Networking and Computer Products" (http://www.jedec.org/news/pressreleases/jedec-publishes-widely-
anticipated-ddr3l-low-voltage-memory-standard).
36. "Addendum No. 2 to JESD79-3, 1.25 V DDR3U-800, DDR3U-1066, DDR3U-1333, and DDR3U-1600"
(https://www.jedec.org/category/keywords/ddr3u).
37. "DDR3: Frequently Asked Questions" (https://web.archive.org/web/20091229190303/http://www.kingst
on.com/channelmarketingcenter/hyperx/literature/MKF_1223_DDR3_FAQ.pdf) (PDF). Archived from
the original (http://www.kingston.com/channelmarketingcenter/hyperx/literature/MKF_1223_DDR3_FA
Q.pdf) (PDF) on 2009-12-29. Retrieved 2009-08-18.
External links
JEDEC standard No. 79-3 (JESD79-3: DDR3 SDRAM)
DDR3 SDRAM standard JESD79-3F (https://www.jedec.org/standards-documents/docs/jesd-79-3d)
Addendum No. 1 to JESD79-3 - 1.35 V DDR3L-800, DDR3L-1066, DDR3L-1333, DDR3L-1600,
and DDR3L-1866 (JESD79-3-1A.01) (https://www.jedec.org/standards-documents/docs/jesd79-3-1
a01)
Addendum No. 2 to JESD79-3 - 1.25 V DDR3U-800, DDR3U-1066, DDR3U-1333, and DDR3U-
1600 (https://www.jedec.org/standards-documents/docs/jesd79-3-2)
Addendum No. 3 to JESD79-3 - 3D Stacked SDRAM (https://www.jedec.org/standards-documents/
docs/jesd79-3-3-0)
SPD (Serial Presence Detect), from JEDEC standard No. 21-C (JESD21C: JEDEC configurations for
solid state memories)
SPD Annex K - Serial Presence Detect (SPD) for DDR3 SDRAM Modules (SPD4_01_02_11) (htt
p://www.jedec.org/standards-documents/docs/spd-4010211)
DDR, DDR2, DDR3 memory slots testing (http://start-test.com/Products/JTAGExternalModules.php)
DDR3 Synchronous DRAM Memory (https://courses.cs.washington.edu/courses/cse467/11wi/lectures/
SDRAM.pdf)
Retrieved from "https://en.wikipedia.org/w/index.php?title=DDR3_SDRAM&oldid=980499996"
This page was last edited on 26 September 2020, at 21:26 (UTC).
Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. By using this site, you
agree to the Terms of Use and Privacy Policy. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-
profit organization.