Thursday, 06 October 2011

UNDERSTANDING INTERLEAVING, SUPERSCALAR, HYPER-THREADING, PIPELINE, USB OTG (On The Go), AND EXPANSION BUS

Interleaving

"Interleaver" redirects here. For the fiber-optic device, see optical interleaver. For interleaved email replies, see posting style.In computer science and telecommunication, interleaving is a way to arrange data in a non-contiguous way to increase performance.
It is typically used:
  • In error-correction coding, particularly within data transmission, disk storage, and computer memory.
  • For multiplexing of several input data over shared media. In telecommunication, it is implemented through dynamic bandwidth allocation mechanisms, where it may particularly be used to resolve quality of service and latency issues. In streaming media applications, it enables quasi-simultaneous reception of input streams, such as video and audio.
  • For improved access performance in computer memory and computer data storage. Examples include non-contiguous storage patterns in disk storage, interleaved memory, and page coloring memory allocation strategies.
Interleaving is also used for multidimensional data structures; see Z-order (curve).
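
As a concrete illustration of the interleaved-memory case in the list above, the minimal C sketch below maps consecutive word addresses across several memory banks so that sequential accesses rotate through different banks. The bank count and word size are assumptions chosen for the example, not values from any particular system.

    #include <stdio.h>

    #define NUM_BANKS 4   /* assumed number of memory banks (power of two) */
    #define WORD_SIZE 4   /* assumed word size in bytes */

    /* With low-order interleaving, consecutive words land in consecutive banks. */
    static unsigned bank_of(unsigned byte_address) {
        return (byte_address / WORD_SIZE) % NUM_BANKS;
    }

    static unsigned offset_in_bank(unsigned byte_address) {
        return (byte_address / WORD_SIZE) / NUM_BANKS;
    }

    int main(void) {
        /* Sequential word accesses rotate through banks 0,1,2,3,0,1,... so
           several banks can be busy at once instead of one bank serializing. */
        for (unsigned addr = 0; addr < 8 * WORD_SIZE; addr += WORD_SIZE)
            printf("address %2u -> bank %u, offset %u\n",
                   addr, bank_of(addr), offset_in_bank(addr));
        return 0;
    }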

Superscalar

A superscalar CPU architecture implements a form of parallelism called instruction level parallelism within a single processor. It therefore allows faster CPU throughput than would otherwise be possible at a given clock rate. A superscalar processor executes more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to redundant functional units on the processor. Each functional unit is not a separate CPU core but an execution resource within a single CPU such as an arithmetic logic unit, a bit shifter, or a multiplier.
In Flynn's taxonomy, a single-core superscalar processor is classified as an SISD processor (Single Instruction stream, Single Data stream), since it still executes a single instruction stream even though several instructions from that stream are in flight at once.
While a superscalar CPU is typically also pipelined, pipelining and superscalar architecture are considered different performance enhancement techniques.
The superscalar technique is traditionally associated with several identifying characteristics (within a given CPU core):
  • Instructions are issued from a sequential instruction stream
  • CPU hardware dynamically checks for data dependencies between instructions at run time (versus software checking at compile time)
  • The CPU accepts multiple instructions per clock cycle
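
To make the dependency checking above concrete, the hedged C fragment below marks which operations are independent of each other. On a hypothetical two-way superscalar core, the hardware could dispatch the two independent additions to separate ALUs in the same cycle, while the dependent chain must issue one operation per cycle; the issue widths and cycle counts in the comments are illustrative assumptions, not measurements.

    /* Independent work: neither result feeds the other, so a 2-way superscalar
       core could dispatch both additions to separate ALUs in one cycle.      */
    int independent(int a, int b, int c, int d) {
        int x = a + b;   /* ALU 0 */
        int y = c + d;   /* ALU 1, same cycle (no dependency on x) */
        return x ^ y;
    }

    /* Dependent chain: each step needs the previous result, so the hardware's
       run-time dependency check forces one issue per cycle.                  */
    int dependent(int a, int b, int c, int d) {
        int x = a + b;   /* cycle 1 */
        int y = x + c;   /* cycle 2, waits for x */
        int z = y + d;   /* cycle 3, waits for y */
        return z;
    }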

Hyper-Threading

Intel Hyper-Threading Technology is a microprocessor technology created by Intel Corporation for several processors based on the Intel NetBurst and Core architectures, such as the Intel Pentium 4, Pentium D, Xeon, and Core 2. The technology was introduced in March 2002 and was initially offered only on the Xeon (Prestonia) processor.
A processor with this technology is seen by an operating system that supports multiple processors (such as Windows NT, Windows 2000, Windows XP Professional, Windows Vista, and GNU/Linux) as two processors, even though physically only one processor is present. With two processors recognized by the operating system, the system can execute each thread more efficiently, because even though these operating systems are multitasking, they execute processes sequentially (one after another) using a queueing algorithm known as a dispatching algorithm.
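
The effect described above can be observed from user space: the operating system simply reports more logical processors than there are physical packages. The minimal Linux sketch below uses the standard sysconf call to read the logical processor count; whether that number is twice the physical core count depends on Hyper-Threading being present and enabled, which is an assumption of the example.

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Number of logical processors the OS scheduler can dispatch to.
           On a single-core CPU with Hyper-Threading enabled this reports 2,
           even though only one physical processor is installed.            */
        long logical = sysconf(_SC_NPROCESSORS_ONLN);
        printf("logical processors visible to the OS: %ld\n", logical);
        return 0;
    }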

Pipeline

An instruction pipeline is a technique used in the design of computers and other digital electronic devices to increase their instruction throughput (the number of instructions that can be executed in a unit of time).
The fundamental idea is to split the processing of a computer instruction into a series of independent steps, with storage at the end of each step. This allows the computer's control circuitry to issue instructions at the processing rate of the slowest step, which is much faster than the time needed to perform all steps at once. The term pipeline refers to the fact that each step is carrying data at once (like water), and each step is connected to the next (like the links of a pipe).
The origin of pipelining is thought to be either the ILLIAC II project or the IBM Stretch project, though a simple version was used earlier in the Z1 in 1939 and the Z3 in 1941.
The IBM Stretch project proposed the terms "Fetch, Decode, and Execute," which became common usage.
Most modern CPUs are driven by a clock. The CPU consists internally of logic and registers (flip flops). When the clock signal arrives, the flip flops take their new value and the logic then requires a period of time to decode the new values. Then the next clock pulse arrives and the flip flops again take their new values, and so on. By breaking the logic into smaller pieces and inserting flip flops between the pieces of logic, the delay before the logic gives valid outputs is reduced. In this way the clock period can be reduced. For example, the classic RISC pipeline is broken into five stages with a set of flip flops between each stage.
  1. Instruction fetch
  2. Instruction decode and register fetch
  3. Execute
  4. Memory access
  5. Register write back
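
The overlap between these five stages is easiest to see in a timing diagram. The short C sketch below prints which of the classic stages (IF, ID, EX, MEM, WB) each instruction occupies on every clock cycle, assuming one new independent instruction enters the pipeline per cycle and ignoring hazards; it is an illustration of the overlap, not a model of any real CPU.

    #include <stdio.h>

    #define STAGES 5
    #define INSTRS 4

    int main(void) {
        const char *stage_name[STAGES] = { "IF", "ID", "EX", "MEM", "WB" };

        /* Instruction i enters the pipeline at cycle i and occupies stage
           (cycle - i) while 0 <= cycle - i < STAGES.                       */
        for (int cycle = 0; cycle < INSTRS + STAGES - 1; cycle++) {
            printf("cycle %d:", cycle + 1);
            for (int i = 0; i < INSTRS; i++) {
                int stage = cycle - i;
                if (stage >= 0 && stage < STAGES)
                    printf("  I%d:%-3s", i + 1, stage_name[stage]);
            }
            printf("\n");
        }
        return 0;
    }

After the pipeline fills (cycle 5 in this sketch), one instruction completes every cycle, which is the source of the throughput gain described above.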
When a programmer (or compiler) writes assembly code, they make the assumption that each instruction is executed before execution of the subsequent instruction is begun. This assumption is invalidated by pipelining. When this causes a program to behave incorrectly, the situation is known as a hazard. Various techniques for resolving hazards such as forwarding and stalling exist.
A non-pipelined architecture is inefficient because some CPU components (modules) sit idle while another module is active during the instruction cycle. Pipelining does not completely remove idle time in a CPU, but making those modules work in parallel improves program execution significantly.
Processors with pipelining are internally organized into stages that can work semi-independently on separate jobs. The stages are linked into a 'chain', so each stage's output is fed to the next stage until the job is done. This organization of the processor allows overall processing time to be significantly reduced.
A deeper pipeline means that there are more stages in the pipeline, and therefore, fewer logic gates in each stage. This generally means that the processor's frequency can be increased as the cycle time is lowered. This happens because there are fewer components in each stage of the pipeline, so the propagation delay is decreased for the overall stage.
Unfortunately, not all instructions are independent. In a simple pipeline, completing an instruction may require 5 stages. To operate at full performance, this pipeline will need to run 4 subsequent independent instructions while the first is completing. If 4 instructions that do not depend on the output of the first instruction are not available, the pipeline control logic must insert a stall or wasted clock cycle into the pipeline until the dependency is resolved. Fortunately, techniques such as forwarding can significantly reduce the cases where stalling is required. While pipelining can in theory increase performance over an unpipelined core by a factor of the number of stages (assuming the clock frequency also scales with the number of stages), in reality, most code does not allow for ideal execution.
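
The last point can be put into numbers. Under the usual textbook model, an N-stage pipeline ideally retires one instruction per cycle (CPI = 1), and stalls raise the effective CPI; the small C sketch below computes the resulting speedup for an assumed stall rate, which is an illustrative figure rather than a measured one.

    #include <stdio.h>

    int main(void) {
        /* Assumed parameters for illustration only. */
        double stages          = 5.0;   /* pipeline depth                       */
        double stall_per_instr = 0.4;   /* average stall cycles per instruction */

        /* Ideal pipelined CPI is 1; stalls add extra cycles per instruction. */
        double effective_cpi = 1.0 + stall_per_instr;

        /* Relative to an unpipelined core needing 'stages' cycles per
           instruction at the same (scaled) clock rate.                      */
        double ideal_speedup  = stages;                 /* 5.0x        */
        double actual_speedup = stages / effective_cpi; /* about 3.6x  */

        printf("ideal speedup:  %.1fx\n", ideal_speedup);
        printf("actual speedup: %.2fx with %.1f stall cycles per instruction\n",
               actual_speedup, stall_per_instr);
        return 0;
    }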

USB On-The-Go and Embedded Host
Virtually every portable device now uses USB for PC connectivity. As these products increase in popularity, there is a growing need for them to communicate both with USB peripherals and directly with each other when a PC is not available. There is also an increase in the number of other, non-PC hosts (Embedded Hosts) which support USB in order to connect to USB peripherals.
The USB On-The-Go and Embedded Host supplements address these scenarios by allowing portable devices and non-PC hosts to have the following enhancements:
  • Targeted host capability to communicate with selected other USB peripherals (see the sketch after this list)
  • Support for direct connections between OTG devices
  • Power saving features to preserve battery life
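
The "targeted host" capability in the first bullet is commonly realized with a Targeted Peripheral List: the OTG device acts as host only for peripherals it has declared support for. The C sketch below is a minimal, hypothetical illustration of such a lookup; the structure names and the vendor/product IDs are invented for the example and are not taken from the supplement or any real driver stack.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical entry in a Targeted Peripheral List (TPL). */
    struct tpl_entry {
        uint16_t vendor_id;
        uint16_t product_id;
    };

    /* Example TPL: the IDs below are made-up placeholders. */
    static const struct tpl_entry tpl[] = {
        { 0x1234, 0x0001 },   /* e.g. a supported keyboard    */
        { 0x1234, 0x0002 },   /* e.g. a supported flash drive */
    };

    static bool is_targeted(uint16_t vid, uint16_t pid) {
        for (size_t i = 0; i < sizeof tpl / sizeof tpl[0]; i++)
            if (tpl[i].vendor_id == vid && tpl[i].product_id == pid)
                return true;
        return false;
    }

    int main(void) {
        /* A targeted host only enumerates peripherals found on its list. */
        printf("0x1234:0x0001 supported? %s\n", is_targeted(0x1234, 0x0001) ? "yes" : "no");
        printf("0xABCD:0x0003 supported? %s\n", is_targeted(0xABCD, 0x0003) ? "yes" : "no");
        return 0;
    }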
Revision 2.0 of the USB On-The-Go and Embedded Host Supplement to the USB 2.0 Specification applies to products operating at low-speed, full-speed and high-speed and is released, including applicable ECNs and errata, as part of the USB 2.0 Specification package. The corresponding OTG Adopters Agreement is also available.
Revision 1.0 of the USB On-The-Go and Embedded Host Supplement to the USB 3.0 Specification enhances these scenarios by adding SuperSpeed capability to USB On-The-Go and is released as part of the USB 3.0 Specification package. The corresponding Adopters Agreement for USB OTG 3.0 is the USB 3.0 Adopters Agreement.
Implementers should note that if they include battery charging capability in their devices, or support for host adapters such as docks or ACAs, they should also reference the Battery Charging Specification.
Compliance testing for products conforming to Revision 2.0 of the USB On-The-Go and Embedded Host Supplement to the USB 2.0 Specification is available now. Compliance testing for SuperSpeed USB OTG products is currently under development. Manufacturers wishing to test their products for compliance should complete the compliance checklist and pass the tests as defined in the USB OTG and Embedded Host compliance plan. An automated tester, the Protocol and Electrical Tester (PET), is required to complete this testing.
In addition to passing USB-IF compliance testing and having their USB On-The-Go products included on the Integrators List, companies wishing to use the certified USB logos must have a current USB-IF Trademark License Agreement on file.

 

Expansion Slot

Alternatively referred to as an expansion port, an expansion slot is a slot located inside a computer on the motherboard or riser board that allows additional boards to be connected to it. Below is a listing of some of the expansion slots commonly found in IBM-compatible computers as well as in other brands of computers.
  • PCI
Short for Peripheral Component Interconnect, PCI was introduced by Intel in 1992, revised in 1993 to version 2.0, and revised again in 1995 to PCI 2.1, as an expansion to the ISA bus. The PCI bus is a 32-bit computer bus that is also available as a 64-bit bus, and it was the most commonly found and used computer bus in computers during the late 1990s and early 2000s.
Examples of PCI devices
  • Modem
  • Network card
  • Sound card
  • Video card
  • AGP
Short for Accelerated Graphics Port, AGP is an advanced port designed for video cards and 3D accelerators. Designed by Intel and introduced in August 1997, AGP provides a dedicated point-to-point channel so that the graphics controller can directly access the system memory.
The AGP channel is 32 bits wide and runs at 66 MHz. This translates into a total bandwidth of 266 MBps, which is much greater than the PCI bandwidth of up to 133 MBps. AGP also supports two optional faster modes, with throughputs of 533 MBps and 1.07 GBps, and it allows 3D textures to be stored in main memory rather than video memory.
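
The bandwidth figures above follow directly from bus width and clock rate. The short C sketch below reproduces the arithmetic (bytes per transfer x clock x transfers per clock) for PCI, AGP, and the two faster AGP modes; the 1x/2x/4x transfer multipliers are the standard AGP modes and appear here as part of the example rather than in the text above.

    #include <stdio.h>

    /* Peak bandwidth in MB/s = bus width in bytes * clock in MHz * transfers per clock. */
    static double bandwidth_mbps(double width_bytes, double clock_mhz, double xfers_per_clock) {
        return width_bytes * clock_mhz * xfers_per_clock;
    }

    int main(void) {
        printf("PCI    (32-bit, 33 MHz, 1x): about %.0f MB/s\n", bandwidth_mbps(4, 33.3, 1));
        printf("AGP 1x (32-bit, 66 MHz, 1x): about %.0f MB/s\n", bandwidth_mbps(4, 66.6, 1));
        printf("AGP 2x (32-bit, 66 MHz, 2x): about %.0f MB/s\n", bandwidth_mbps(4, 66.6, 2));
        printf("AGP 4x (32-bit, 66 MHz, 4x): about %.0f MB/s\n", bandwidth_mbps(4, 66.6, 4));
        return 0;
    }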
Each computer with AGP support will have either one AGP slot or on-board AGP video. If you need more than one video card in the computer, you can combine one AGP video card with one PCI video card, or use a motherboard that supports SLI.
Not all operating systems support AGP because of limited or no driver support. For example, Windows 95 did not incorporate AGP support. See the Windows versions page for information about Windows versions that support AGP.
  • PCI Express

Originally known as 3rd Generation I/O (3GIO), PCI Express, or PCIe, was approved as a standard in July 2002 and is an expansion bus found in modern computers. PCI Express is designed to replace PCI and AGP and is available in several different formats: x1, x2, x4, x8, x12, x16, and x32.
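
Because PCI Express is built from serial lanes, total bandwidth scales with the link width in the x1 through x32 formats listed above. The sketch below multiplies an assumed per-lane figure (roughly 250 MB/s per direction for the original PCIe 1.x signaling) by the lane count; the per-lane number is an assumption of the example and differs for later PCIe generations.

    #include <stdio.h>

    int main(void) {
        /* Assumed per-lane bandwidth, per direction, for first-generation PCIe. */
        const double per_lane_mbps = 250.0;
        const int widths[] = { 1, 2, 4, 8, 12, 16, 32 };

        for (int i = 0; i < (int)(sizeof widths / sizeof widths[0]); i++)
            printf("PCIe x%-2d: about %.0f MB/s per direction\n",
                   widths[i], widths[i] * per_lane_mbps);
        return 0;
    }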
source :



