| content | label | category | dataset | node_id | split |
|---|---|---|---|---|---|
| string (lengths 611–179k) | string (10 classes) | string (10 classes) | string (1 class) | int64 (0–11.7k) | string (4 classes) |
1-hop neighbor's text information: superscalar_processor. A superscalar processor is a CPU that implements a form of parallelism called instruction-level parallelism within a single processor. In contrast to a scalar processor, which can execute at most one single instruction per clock cycle, a superscalar processor can execute more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to different execution units on the processor. It therefore allows more throughput (the number of instructions executed in a unit of time) than would otherwise be possible at a given clock rate. Each execution unit is not a separate processor core, or a processor in a multi-core processor, but an execution resource within a single CPU, such as an arithmetic logic unit. In Flynn's taxonomy, a single-core superscalar processor is classified as an SISD processor (single instruction stream, single data stream), though a single-core superscalar processor that supports short vector operations could be classified as SIMD (single instruction stream, multiple data streams). A multi-core superscalar processor is classified as an MIMD processor (multiple instruction streams, multiple data streams). A superscalar CPU is typically also pipelined; superscalar execution and pipelining are considered different performance-enhancement techniques: the former executes multiple instructions in parallel by using multiple execution units, whereas the latter executes multiple instructions in the same execution unit in parallel by dividing the execution unit into different phases. The superscalar technique is traditionally associated with several identifying characteristics within a given CPU. Seymour Cray's CDC 6600 (1966) is often mentioned as the first superscalar design, and the 1967 IBM System/360 Model 91 was another superscalar mainframe. The Motorola MC88100 (1988), the Intel i960CA (1989), and the AMD 29000-series 29050 (1990) were the first commercial single-chip superscalar microprocessors. RISC microprocessors like these were the first to have superscalar execution, because RISC architectures freed transistors and die area that could be used to include multiple execution units; this was why RISC designs were faster than CISC designs through the 1980s and into the 1990s. Except for CPUs used in low-power applications, embedded systems, and battery-powered devices, essentially all general-purpose CPUs developed since about 1998 are superscalar.

The P5 Pentium was the first superscalar x86 processor; the Nx586, P6 Pentium Pro, and AMD K5 were among the first designs which decode x86 instructions asynchronously into dynamic microcode-like micro-op sequences prior to actual execution on a superscalar microarchitecture. This opened up dynamic scheduling of buffered partial instructions and enabled more parallelism to be extracted compared to the more rigid methods used in the simpler P5 Pentium; it also simplified speculative execution and allowed higher clock frequencies compared to designs such as the advanced Cyrix 6x86. The simplest processors are scalar processors. Each instruction executed by a scalar processor typically manipulates one or two data items at a time. By contrast, each instruction executed by a vector processor operates simultaneously on many data items. An analogy is the difference between scalar and vector arithmetic. A superscalar processor is a mixture of the two: each instruction processes one data item, but there are multiple execution units within each CPU, so multiple instructions can process separate data items concurrently. Superscalar CPU design emphasizes improving the instruction dispatcher's accuracy and allowing it to keep the multiple execution units in use at all times. This has become increasingly important as the number of units has increased: while early superscalar CPUs would have two ALUs and a single FPU, a later design such as the PowerPC 970 includes four ALUs, two FPUs, and two SIMD units. If the dispatcher is ineffective at keeping all of these units fed with instructions, the performance of the system will be no better than that of a simpler, cheaper design. A superscalar processor usually sustains an execution rate in excess of one instruction per machine cycle. But merely processing multiple instructions concurrently does not make an architecture superscalar, since pipelined, multiprocessor, or multi-core architectures also achieve that, by different methods. In a superscalar CPU the dispatcher reads instructions from memory and decides which ones can be run in parallel, dispatching each to one of the several execution units contained inside a single CPU. Therefore a superscalar processor can be envisioned as having multiple parallel pipelines, each of which is processing instructions simultaneously from a single instruction thread. Available performance improvement from superscalar techniques is limited by three key areas. First, existing binary executable programs have varying degrees of intrinsic parallelism. In some cases instructions are not dependent on each other and can be executed simultaneously; in other cases they are inter-dependent, and one instruction impacts either the resources or the results of the other. The instructions codice_1 can be run in parallel because none of the results depend on other calculations. However, the instructions codice_2 might not be runnable in parallel, depending on the order in which the instructions complete as they move through the units. When the number of simultaneously issued instructions increases, the cost of dependency checking increases extremely rapidly. This is exacerbated by the need to check dependencies at run time and at the CPU's clock rate. The cost includes additional logic gates required to implement the checks, and time delays through those gates. Research shows the gate cost in some cases may be formula_1 gates, and the delay cost formula_2, where formula_3 is the number of instructions in the processor's instruction set, and formula_4 is the number of simultaneously dispatched instructions. Even though the instruction stream may contain no inter-instruction dependencies, a superscalar CPU must nonetheless check for that possibility, since there is no assurance otherwise, and failure to detect a dependency would produce incorrect results. No matter how advanced the semiconductor process or how fast the switching speed, this places a practical limit on how many instructions can be simultaneously dispatched. While process advances will allow ever greater numbers of execution units (e.g. ALUs), the burden of checking instruction dependencies grows rapidly, as does the complexity of register renaming circuitry to mitigate some dependencies. Collectively, the power consumption, complexity, and gate delay costs limit the achievable superscalar speedup to roughly eight simultaneously dispatched instructions. However, even given infinitely fast dependency checking logic on an otherwise conventional superscalar CPU, if the instruction stream itself has many dependencies, this would also limit the possible speedup. Thus the degree of intrinsic parallelism in the code stream forms a second limitation. Collectively, these limits drive investigation into alternative architectural changes such as very long instruction word (VLIW), explicitly parallel instruction computing (EPIC), simultaneous multithreading (SMT), and multi-core computing. With VLIW, the burdensome task of dependency checking by hardware logic at run time is removed and delegated to the compiler. Explicitly parallel instruction computing (EPIC) is like VLIW with extra cache prefetching instructions. Simultaneous multithreading (SMT) is a technique for improving the overall efficiency of superscalar processors: SMT permits multiple independent threads of execution to better utilize the resources provided by modern processor architectures. Superscalar processors differ from multi-core processors in that the several execution units are not entire processors. A single processor is composed of finer-grained execution units such as the ALU, integer multiplier, integer shifter, FPU, etc. There may be multiple versions of each execution unit to enable execution of many instructions in parallel. This differs from a multi-core processor, which concurrently processes instructions from multiple threads, one thread per processing unit (called a "core"). It also differs from a pipelined processor, where the multiple instructions can concurrently be in various stages of execution, assembly-line fashion. The various alternative techniques are not mutually exclusive—they are frequently combined in a single processor. Thus a multicore CPU is possible where each core is an independent processor containing multiple parallel pipelines, each pipeline being superscalar. Some processors also include vector capability.
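The dependency check that limits issue width can be sketched as a pairwise scan of an issue group for read-after-write, write-after-write, and write-after-read hazards. The following is a minimal illustrative model (the instruction encoding as `(dest, srcs)` tuples and the register names are choices made for this sketch, not any real ISA's format):

```python
def can_dual_issue(a, b):
    """Return True if instruction b may issue alongside instruction a.

    Each instruction is modeled as (dest_reg, src_regs). A RAW hazard
    exists if b reads a's destination; WAW if both write the same
    register; WAR if b overwrites a register that a still reads.
    """
    dest_a, srcs_a = a
    dest_b, srcs_b = b
    raw = dest_a in srcs_b     # b reads what a writes
    waw = dest_a == dest_b     # both write the same register
    war = dest_b in srcs_a     # b overwrites what a reads
    return not (raw or waw or war)

# r1 = r2 + r3 and r4 = r5 + r6 are independent -> dual-issue is safe
assert can_dual_issue(("r1", {"r2", "r3"}), ("r4", {"r5", "r6"}))
# r1 = r2 + r3 followed by r4 = r1 + r5 has a RAW hazard -> serialize
assert not can_dual_issue(("r1", {"r2", "r3"}), ("r4", {"r1", "r5"}))
```

Because every instruction in an n-wide issue group must be checked against every other, the number of pairwise checks grows roughly quadratically with issue width — the hardware cost explosion the article describes.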
1-hop neighbor's text information: arithmetic_logic_unit. An arithmetic logic unit (ALU) is a combinational digital electronic circuit that performs arithmetic and bitwise operations on integer binary numbers. This is in contrast to a floating-point unit (FPU), which operates on floating point numbers. An ALU is a fundamental building block of many types of computing circuits, including the central processing unit (CPU) of computers, FPUs, and graphics processing units (GPUs). A single CPU, FPU, or GPU may contain multiple ALUs. The inputs to an ALU are the data to be operated on, called operands, and a code indicating the operation to be performed; the ALU's output is the result of the performed operation. In many designs, the ALU also has status inputs or outputs, or both, which convey information about a previous operation or the current operation, respectively, between the ALU and external status registers. An ALU has a variety of input and output nets, which are the electrical conductors used to convey digital signals between the ALU and external circuitry. When an ALU is operating, external circuits apply signals to the ALU inputs and, in response, the ALU produces and conveys signals to external circuitry via its outputs. A basic ALU has three parallel data buses consisting of two input operands (A and B) and a result output. Each data bus is a group of signals that conveys one binary integer number; typically, the A, B, and result bus widths (the number of signals comprising each bus) are identical and match the native word size of the external circuitry (e.g. the encapsulating CPU or other processor). The opcode input is a parallel bus that conveys to the ALU an operation selection code, an enumerated value that specifies the desired arithmetic or logic operation to be performed by the ALU. The opcode size (its bus width) determines the maximum number of different operations the ALU can perform; for example, a four-bit opcode can specify up to sixteen different ALU operations. Generally, an ALU opcode is not the same as a machine language opcode, though in some cases it may be directly encoded as a bit field within a machine language opcode. The status outputs are various individual signals that convey supplemental information about the result of the current ALU operation; general-purpose ALUs commonly have several such status signals. At the end of each ALU operation, the status output signals are usually stored in external registers to make them available for future ALU operations (e.g. to implement multiple-precision arithmetic) or for controlling conditional branching. The collection of bit registers that store the status outputs is often treated as a single, multi-bit register, referred to as the status register or condition code register. The status inputs allow additional information to be made available to the ALU when performing an operation — typically a single carry-in bit that is the stored carry-out from a previous ALU operation. An ALU is a combinational logic circuit, meaning that its outputs change asynchronously in response to input changes. In normal operation, stable signals are applied to all of the ALU inputs and, when enough time (known as the propagation delay) has passed for the signals to propagate through the ALU circuitry, the result of the ALU operation appears at the ALU outputs. The external circuitry connected to the ALU is responsible for ensuring the stability of ALU input signals throughout the operation, and for allowing sufficient time for the signals to propagate through the ALU before sampling the ALU result. In general, external circuitry controls an ALU by applying signals to its inputs. Typically, the external circuitry employs sequential logic to control the ALU operation, paced by a clock signal of a sufficiently low frequency to ensure enough time for the ALU outputs to settle under worst-case conditions. For example, a CPU begins an ALU addition operation by routing operands from their sources (usually registers) to the ALU's operand inputs, while the control unit simultaneously applies a value to the ALU's opcode input, configuring it to perform addition. At the same time, the CPU also routes the ALU result output to a destination register that will receive the sum. The ALU's input signals, which are held stable until the next clock, are allowed to propagate through the ALU and to the destination register while the CPU waits for the next clock. When the next clock arrives, the destination register stores the ALU result and, since the ALU operation has completed, the ALU inputs may be set up for the next ALU operation. A number of basic arithmetic and bitwise logic functions are commonly supported by ALUs, and basic, general-purpose ALUs typically include these operations in their repertoires. ALU shift operations cause an operand (A or B) to shift left or right, depending on the opcode, and the shifted operand appears at the output. Simple ALUs typically can shift the operand by only one bit position, whereas more complex ALUs employ barrel shifters that allow them to shift the operand by an arbitrary number of bits in one operation. In all single-bit shift operations, the bit shifted out of the operand appears on carry-out; the value of the bit shifted into the operand depends on the type of shift. In integer arithmetic computations, multiple-precision arithmetic is an algorithm that operates on integers larger than the ALU word size. To do this, the algorithm treats each operand as an ordered collection of ALU-size fragments, arranged from most-significant (MS) to least-significant (LS) or vice versa. For example, in the case of an 8-bit ALU, the 24-bit integer codice_1 would be treated as a collection of three 8-bit fragments: codice_2 (MS), codice_3, and codice_4 (LS). Since the size of a fragment exactly matches the ALU word size, the ALU can directly operate on this piece of the operand. The algorithm uses the ALU to directly operate on particular operand fragments and thus generate a corresponding fragment (a "partial") of the multi-precision result. Each partial, when generated, is written to an associated region of storage that has been designated for the multiple-precision result. This process is repeated for all operand fragments so as to generate a complete collection of partials, which is the result of the multiple-precision operation. In arithmetic operations (e.g., addition, subtraction), the algorithm starts by invoking an ALU operation on the operands' LS fragments, thereby producing both an LS partial and a carry-out bit. The algorithm writes the partial to designated storage, whereas the processor's state machine typically stores the carry-out bit to an ALU status register. The algorithm then advances to the next fragment of each operand's collection and invokes an ALU operation on these fragments along with the stored carry bit from the previous ALU operation, thus producing another (more significant) partial and a carry-out bit. As before, the carry bit is stored to the status register and the partial is written to designated storage. This process repeats until all operand fragments have been processed, resulting in a complete collection of partials in storage, which comprise the multi-precision arithmetic result. In multiple-precision shift operations, the order of operand fragment processing depends on the shift direction. In left-shift operations, fragments are processed LS first because the LS bit of each partial—which is conveyed via the stored carry bit—must be obtained from the MS bit of the previously left-shifted, less-significant operand fragment. Conversely, operands are processed MS first in right-shift operations, because the MS bit of each partial must be obtained from the LS bit of the previously right-shifted, more-significant operand fragment. In bitwise logical operations (e.g., logical AND, logical OR), the operand fragments may be processed in any arbitrary order, because each partial depends only on the corresponding operand fragments (the stored carry bit from the previous ALU operation is ignored). Although an ALU can be designed to perform complex functions, the resulting higher circuit complexity, cost, power consumption, and larger size make this impractical in many cases. Consequently, ALUs are often limited to simple functions that can be executed at very high speeds (i.e., very short propagation delays), and the external processor circuitry is responsible for performing complex functions by orchestrating a sequence of simpler ALU operations. For example, computing the square root of a number might be implemented in various ways depending on ALU complexity; the implementations transition from fastest and most expensive to slowest and least costly. The square root is calculated in all cases, but processors with simple ALUs take longer to perform the calculation because multiple ALU operations must be performed. An ALU is usually implemented either as a stand-alone integrated circuit (IC), such as the 74181, or as part of a more complex IC. In the latter case, an ALU is typically instantiated by synthesizing it from a description written in VHDL, Verilog, or some other hardware description language; a simple 8-bit ALU, for example, can be described in a short VHDL listing. The mathematician John von Neumann proposed the ALU concept in 1945 in a report on the foundations for a new computer called the EDVAC. The cost, size, and power consumption of electronic circuitry were relatively high throughout the infancy of the information age. Consequently, all serial computers and many early computers, such as the PDP-8, had a simple ALU that operated on one data bit at a time, although they often presented a wider word size to programmers. One of the earliest computers to have multiple discrete single-bit ALU circuits was the 1948 Whirlwind, which employed sixteen such "math units" to enable it to operate on 16-bit words. In 1967, Fairchild introduced the first ALU implemented as an integrated circuit, the Fairchild 3800, consisting of an eight-bit ALU with accumulator. Other integrated-circuit ALUs soon emerged, including four-bit ALUs such as the Am2901 and 74181. These devices were typically "bit slice" capable, meaning they had carry look-ahead signals that facilitated the use of multiple interconnected ALU chips to create an ALU with a wider word size. These devices quickly became popular and were widely used in bit-slice minicomputers. Microprocessors began to appear in the early 1970s. Even though transistors had become smaller, there was often insufficient die space for a full-word-width ALU and, as a result, some early microprocessors employed a narrow ALU that required multiple cycles per machine language instruction. Examples include the popular Zilog Z80, which performed eight-bit additions with a four-bit ALU. Over time, transistor geometries shrank further, following Moore's law, and it became feasible to build wider ALUs on microprocessors. Modern integrated circuit (IC) transistors are orders of magnitude smaller than those of the early microprocessors, making it possible to fit highly complex ALUs on ICs. Today, many modern ALUs have wide word widths and architectural enhancements such as barrel shifters and binary multipliers that allow them to perform, in a single clock cycle, operations that would have required multiple operations on earlier ALUs.
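The multiple-precision addition loop described above — operate on LS fragments first, saving the carry bit between ALU operations — can be sketched behaviorally in Python. This is an illustrative software model, not hardware code; the 8-bit fragment width matches the article's example:

```python
def alu_add(a, b, carry_in=0, width=8):
    """One width-bit ALU addition: returns (partial, carry_out)."""
    total = a + b + carry_in
    return total & ((1 << width) - 1), total >> width

def multiprecision_add(a_frags, b_frags):
    """Add two integers given as LS-first lists of 8-bit fragments.

    Models the article's algorithm: one ALU operation per fragment pair,
    with the carry-out of each operation fed into the next as carry-in,
    and each partial written out as a fragment of the wide result.
    """
    carry, partials = 0, []
    for a, b in zip(a_frags, b_frags):
        partial, carry = alu_add(a, b, carry)
        partials.append(partial)   # the "partial" written to storage
    return partials, carry

# 0x01FF + 0x0001 on an 8-bit ALU (fragments LS first) -> 0x0200
assert multiprecision_add([0xFF, 0x01], [0x01, 0x00]) == ([0x00, 0x02], 0)
```

Note that the carry variable plays exactly the role the article assigns to the stored carry bit in the status register.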
1-hop neighbor's text information: computer_data_storage. Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers. The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but larger and cheaper options farther away. Generally, the fast volatile technologies (which lose data when off power) are referred to as "memory", while slower persistent technologies are referred to as "storage". Even the first computer designs, Charles Babbage's Analytical Engine and Percy Ludgate's Analytical Machine, clearly distinguished between processing and memory (Babbage stored numbers as rotations of gears, while Ludgate stored numbers as displacements of rods in shuttles). This distinction was extended in the von Neumann architecture, where the CPU consists of two main parts: the control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result; it would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines. A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 1 or 0. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes (40 million bits), with one byte per character. Data are encoded by assigning a bit pattern to each character, digit, or multimedia object. Many standards exist for encoding (e.g. character encodings like ASCII, image encodings like JPEG, video encodings like MPEG-4). By adding bits to each encoded unit, redundancy allows the computer to both detect errors in coded data and correct them based on mathematical algorithms. Errors generally occur with low probability due to random bit-value flipping, to "physical bit fatigue" (loss of the physical bit's ability to maintain a distinguishable value, 0 or 1), or to errors in inter- or intra-computer communication. A random bit flip (e.g. due to random radiation) is typically corrected upon detection. A bit or a group of malfunctioning physical bits (the specific defective bit is not always known; the group definition depends on the specific storage device) is typically automatically fenced out and taken out of use by the device, and replaced with another functioning equivalent group in the device, with the corrected bit values restored if possible. The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection; a detected error is then retried. Data compression methods allow, in many cases (such as a database), representing a string of bits by a shorter bit string ("compress") and reconstructing the original string ("decompress") when needed. This utilizes substantially less storage (tens of percent) for many types of data, at the cost of more computation (compress and decompress when needed). Analysis of the trade-off between storage cost savings and the costs of the related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not. For security reasons, certain types of data (e.g. credit-card information) may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots. Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary, tertiary, and off-line storage is also guided by cost per bit. In contemporary usage, "memory" is usually semiconductor storage read-write random-access memory, typically DRAM (dynamic RAM), or other forms of fast but temporary storage. "Storage" consists of storage devices and their media not directly accessible by the CPU (secondary or tertiary storage), typically hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down). Historically, memory has been called core memory, main memory, real storage, or internal memory, while non-volatile storage devices have been referred to as secondary storage, external memory, or auxiliary/peripheral storage. Primary storage (also known as main memory, internal memory, or prime memory), often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required; any data actively operated on is also stored there, in a uniform manner. Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive. This led to modern random-access memory (RAM): small-sized and light, but quite expensive at the same time. The particular types of RAM used for primary storage are also volatile, i.e. they lose the information when not powered. As shown in the diagram, traditionally there are two more sub-layers of primary storage besides main large-capacity RAM. Main memory is directly or indirectly connected to the central processing unit via a memory bus — actually two buses (not on the diagram): an address bus and a data bus. The CPU first sends a number through the address bus, called the memory address, that indicates the desired location of data; then it reads or writes the data in the memory cells using the data bus. Additionally, a memory management unit (MMU) is a small device between the CPU and RAM recalculating the actual memory address, for example to provide an abstraction of virtual memory or other tasks. As the RAM types used for primary storage are volatile (uninitialized at start up), a computer containing only such storage would not have a source to read instructions from in order to start the computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage into RAM and start to execute it. A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing, as most ROM types are also capable of random access). Many types of "ROM" are not literally read only, as updates are possible; however, they are slow, and the memory must be erased in large portions before it can be re-written. Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM; rather, they use large capacities of secondary storage, which is non-volatile as well and not as costly. Recently, "primary storage" and "secondary storage" in some uses refer to what was historically called, respectively, secondary storage and tertiary storage. Secondary storage (also known as external memory or auxiliary storage) differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfer the desired data to primary storage. Secondary storage is non-volatile (retaining data when its power is shut off). Modern computer systems typically have two orders of magnitude more secondary storage than primary storage, because secondary storage is less expensive. In modern computers, hard disk drives (HDDs) or solid-state drives (SSDs) are usually used as secondary storage. The access time per byte for HDDs or SSDs is typically measured in milliseconds (one thousandth of a second), while the access time per byte for primary storage is measured in nanoseconds (one billionth of a second). Thus, secondary storage is significantly slower than primary storage. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. Other examples of secondary storage technologies include USB flash drives, floppy disks, magnetic tape, paper tape, punched cards, and RAM disks. Once the disk read/write head on HDDs reaches the proper placement and the data, subsequent data on the track are very fast to access. To reduce the seek time and rotational latency, data are transferred to and from disks in large contiguous blocks. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based upon sequential and block access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel in order to increase the bandwidth between primary and secondary memory. Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also providing metadata describing the owner of a certain file, the access time, the access permissions, and other information. Most computer operating systems use the concept of virtual memory, allowing utilization of more primary storage capacity than is physically available in the system. As the primary memory fills up, the system moves the least-used chunks (pages) to a swap file or page file on secondary storage, retrieving them later when needed. If a lot of pages are moved to slower secondary storage, the system performance is degraded. Tertiary storage, or tertiary memory, is a level below secondary storage. Typically, it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; such data are often copied to secondary storage before use. It is primarily used for archiving rarely accessed information since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1–10 milliseconds). This is primarily useful for extraordinarily large data stores accessed without human operators. Typical examples include tape libraries and optical jukeboxes. When a computer needs to read information from tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library. Tertiary storage is also known as nearline storage because it is "near to online". The formal distinction between online, nearline, and offline storage is, for example: always-on spinning hard disk drives are online storage; spinning drives that spin down automatically, such as in massive arrays of idle disks (MAID), are nearline storage; removable media such as tape cartridges that can be automatically loaded, as in tape libraries, are nearline storage; and tape cartridges that must be manually loaded are offline storage. Off-line storage is computer data storage on a medium or a device that is not under the control of a processing unit. The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again; unlike tertiary storage, it cannot be accessed without human interaction. Off-line storage is used to transfer information, since the detached medium can easily be physically transported. Additionally, in case a disaster, for example a fire, destroys the original data, a medium in a remote location will probably be unaffected, enabling disaster recovery. Off-line storage increases general information security, since it is physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if the information stored for archival purposes is rarely accessed, off-line storage is less expensive than tertiary storage. In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are most popular, and to a much lesser extent removable hard disk drives; in enterprise uses, magnetic tape is predominant; older examples include floppy disks, Zip disks, and punched cards. Storage technologies at all levels of the storage hierarchy can be differentiated by evaluating certain core characteristics, as well as by measuring characteristics specific to a particular implementation. The core characteristics are volatility, mutability, accessibility, and addressability. For any particular implementation of any storage technology, the characteristics worth measuring are capacity and performance. Non-volatile memory retains the stored information even if not constantly supplied with electric power; it is suitable for long-term storage of information. Volatile memory requires constant power to maintain the stored information. The fastest memory technologies are volatile ones, although that is not a universal rule. Since primary storage is required to be very fast, it predominantly uses volatile memory. Dynamic random-access memory is a form of volatile memory that also requires the stored information to be periodically reread and rewritten, or refreshed, otherwise it would vanish. Static random-access memory is a form of volatile memory similar to DRAM with the exception that it never needs to be refreshed as long as power is applied; it loses its content when the power supply is lost. An uninterruptible power supply (UPS) can be used to give a computer a brief window of time to move information from primary volatile storage into non-volatile storage before the batteries are exhausted. Some systems, for example EMC Symmetrix, have integrated batteries that maintain volatile storage for several minutes. Utilities such as hdparm and sar can be used to measure IO performance in Linux. Full disk encryption, volume and virtual disk encryption, and/or file/folder encryption are readily available for most storage devices. Hardware memory encryption is available in Intel architecture, supporting Total Memory Encryption (TME) and page-granular memory encryption with multiple keys (MKTME), and in the SPARC M7 generation since October 2015. The most commonly used data storage media are semiconductor, magnetic, and optical, while paper still sees some limited usage. Some other fundamental storage technologies, such as all-flash arrays (AFAs), are proposed for development. Semiconductor memory uses semiconductor-based integrated circuits to store information. A semiconductor memory chip may contain millions of tiny transistors or capacitors. Both volatile and non-volatile forms of semiconductor memory exist. In modern computers, primary storage almost exclusively consists of dynamic volatile semiconductor memory, or dynamic random-access memory. Since the turn of the century, a type of non-volatile semiconductor memory known as flash memory has steadily gained share as off-line storage for home computers. Non-volatile semiconductor memory is also used for secondary storage in various advanced electronic devices and specialized computers that are designed for them. As early as 2006, notebook and desktop computer manufacturers started using flash-based solid-state drives (SSDs) as default configuration options for secondary storage, either in addition to or instead of the more traditional HDD. Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store information. Magnetic storage is non-volatile. The information is accessed using one or more read/write heads, which may contain one or more recording transducers. A read/write head covers only part of the surface, so that the head or medium or both must be moved relative to one another in order to access data. In modern computers, magnetic storage takes several forms; magnetic storage was also used in early computers. Optical storage, the typical optical disc, stores information in deformities on the surface of a circular disc and reads this information by illuminating the surface with a laser diode and observing the reflection. Optical disc storage is non-volatile. The deformities may be permanent (read-only media), formed once (write-once media), or reversible (recordable or read/write media); several such forms are currently in common use. Magneto-optical disc storage is optical disc storage where the magnetic state on a ferromagnetic surface stores information. The information is read optically and written by combining magnetic and optical methods. Magneto-optical disc storage is non-volatile, sequential-access, slow-write, fast-read storage used for tertiary and off-line storage. 3D optical data storage has also been proposed. Light-induced magnetization melting in magnetic photoconductors has also been proposed for high-speed, low-energy-consumption magneto-optical storage. Paper data storage, typically in the form of paper tape or punched cards, has long been used to store information for automatic processing, particularly before general-purpose computers existed. Information was recorded by punching holes into the paper or cardboard medium and was read mechanically (or later, optically) to determine whether a particular location on the medium was solid or contained a hole. Some technologies allow people to make marks on paper that are easily read by machine—these are widely used for tabulating votes and grading standardized tests. Barcodes made it possible for any object that was to be sold or transported to have some computer-readable information securely attached to it. If a group of bits malfunctions, it may be resolved by error detection and correction mechanisms (see above). A storage device malfunction requires different solutions; the following solutions are commonly used and valid for most storage devices. Device mirroring and typical RAID are designed to handle a single device failure in a RAID group of devices. However, if a second failure occurs before the RAID group is completely repaired from the first failure, then data can be lost. The probability of a single failure is typically small; thus the probability of two failures in the same RAID group in time proximity is much smaller (approximately the probability squared, i.e., multiplied by itself). If a database cannot tolerate even such a small probability of data loss, then the RAID group itself is replicated (mirrored). In many cases such mirroring is done geographically remotely, in a different storage array, to also handle recovery from disasters (see disaster recovery). Secondary or tertiary storage may connect to a computer utilizing computer networks. This concept does not pertain to primary storage, which is shared between multiple processors to a lesser degree. Large quantities of individual magnetic tapes and optical or magneto-optical discs may be stored in robotic tertiary storage devices: in the tape storage field they are known as tape libraries, and in the optical storage field as optical jukeboxes or optical disk libraries, per analogy. The smallest forms of either technology containing just one drive device are referred to as autoloaders or autochangers. Robotic-access storage devices may have a number of slots, each holding individual media, and usually one or more picking robots that traverse the slots and load media into built-in drives. The arrangement of the slots and picking devices affects performance. Important characteristics of such storage are the possible expansion options: adding slots, modules, drives, or robots. Tape libraries may have from 10 to more than 100,000 slots, and provide terabytes or petabytes of near-line information. Optical jukeboxes are somewhat smaller solutions, up to 1,000 slots. Robotic storage is used for backups, and for high-capacity archives in the imaging, medical, and video industries. Hierarchical storage management is the best-known archiving strategy of automatically migrating long-unused files from fast hard disk storage to libraries or jukeboxes. If the files are needed, they are retrieved back to disk.
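The virtual-memory behavior described in this section — least-used pages pushed out to a swap file when primary memory fills — can be sketched with a least-recently-used (LRU) replacement policy. This is a toy model under assumed names (`ToyPager`, `touch`), not any real kernel's algorithm; real operating systems typically use LRU approximations such as clock/second-chance:

```python
from collections import OrderedDict

class ToyPager:
    """Keep at most `frames` pages in 'RAM'; evict the LRU page to 'swap'."""

    def __init__(self, frames):
        self.frames = frames
        self.ram = OrderedDict()   # page -> contents, least recent first
        self.swap = {}             # evicted pages live here

    def touch(self, page):
        """Access a page, returning 'hit' or 'fault'."""
        if page in self.ram:
            self.ram.move_to_end(page)              # mark as recently used
            return "hit"
        self.ram[page] = self.swap.pop(page, None)  # page fault: load it
        if len(self.ram) > self.frames:
            victim, data = self.ram.popitem(last=False)  # evict LRU page
            self.swap[victim] = data
        return "fault"

pager = ToyPager(frames=2)
outcomes = [pager.touch(p) for p in ["a", "b", "a", "c"]]
assert outcomes == ["fault", "fault", "hit", "fault"]
assert "b" in pager.swap   # "b" was least recently used, so it was evicted
```

The degradation the article mentions (thrashing) corresponds to a reference pattern whose working set exceeds `frames`, so nearly every `touch` is a fault.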
Target text information: ultrasparc_iii.ultrasparc iii ultrasparc iii code-named cheetah microprocessor implements sparc v9 instruction set architecture isa developed sun microsystems fabricated texas instruments introduced 2001 operates 600 900 mhz succeeded ultrasparc iv 2004 gary lauterbach chief architect presented '97 microprocessor forum probable introduction date ultrasparc iii 1999 would competed digital equipment corporation alpha 21264 intel itanium merced case delayed 2001 despite late awarded analysts choice award best server/workstation processor 2001 microprocessor report multiprocessing features ultrasparc iii in-order superscalar microprocessor ultrasparc iii designed shared memory multiprocessing performance several features aid achieving goal integrated memory controller dedicated multiprocessing bus fetches four instructions per cycle instruction cache decoded instructions sent dispatch unit six time dispatch unit issues instructions appropriate execution units depending operand resource availability execution resources consisted two arithmetic logic units alus load store unit two floating-point units one alus execute simple integer instructions loads two floating point units also equal one execute simple instructions adds executes multiplies divides square roots ultrasparc iii split primary instruction data caches instruction cache capacity 32 kb data cache capacity 64 kb four-way set-associative 32-byte cache line external l2 cache maximum capacity 8 mb accessed via dedicated 256-bit bus operating 200 mhz peak bandwidth 6.4 gb/s cache built synchronous static random access memory clocked frequencies 200 mhz l2 cache tags located on-die enable clocked microprocessor clock frequency increases bandwidth accessing cache tags enabling ultrasparc scale higher clock frequencies easily part increased bandwidth cache tags used cache coherency traffic required multiprocessor systems ultrasparc iii designed used maximum capacity l2 cache 8 mb l2 cache 
tags 90 kb size external interface consists 128-bit data bus 43-bit address bus operating 150 mhz data bus used access memory memory microprocessors shared i/o devices ultrasparc integrated memory controller implements dedicated 128-bit bus operating 150 mhz access 4 gb local memory integrated memory controller used reduce latency thus improve performance unlike ultrasparc microprocessors use feature reduce cost ultrasparc iii consisted 16 million transistors 75 contained caches tags initially fabricated texas instruments c07a process complementary metal–oxide–semiconductor cmos process 0.18 µm feature size six-levels aluminium interconnect 2001 fabricated 0.13 µm process aluminium interconnects enabled operate 750 900 mhz die packaged using controlled collapse chip connection method first sun microprocessor unlike microprocessors bonded way majority solder bumps placed peripheral ring instead distributed across die packaged 1,200-pad land grid array lga package ultrasparc iii cu code-named cheetah+ development original ultrasparc iii operated higher clock frequencies 1002 1200 mhz die size 232 mm fabricated 0.13 µm 7-layer copper metallization cmos process texas instruments packaged 1,368-pad ceramic lga package ultrasparc iiii code named jalapeno derivative ultrasparc iii workstations low-end one four processor servers introduced 2003 operates 1064 1593 mhz on-die l2 cache integrated memory controller capable four-way multiprocessing glue-less system bus optimized function contains 87.5 million transistors 178.5 mm die fabricated texas instruments 0.13 µm seven-layer metal copper cmos process low-k dielectric ultrasparc iiii unified 1 mb l2 cache operates half microprocessor clock frequency six-cycle latency two-cycle throughput load use latency 15 cycles tag store protected parity data ecc every 64-byte cache line 36 ecc bits enabling correction one-bit errors detection error within four bits cache four-way set-associative 64-byte line size physically indexed 
tagged uses 2.76 µm sram cell consists 63 million transistors on-die memory controller supports 256 mb 16 gb 133 mhz ddr-i sdram memory accessed via 137-bit memory bus 128 bits data 9 ecc memory bus peak bandwidth 4.2 gb/s microprocessor designed support four-way multiprocessing jbus used connect four microprocessors 128-bit address data multiplexed bus operates one half one third microprocessor clock frequency ultrasparc iiii+ code-named serrano development ultrasparc iiii scheduled introduction second half 2005 cancelled year favor ultrasparc iv+ ultrasparc t1 ultrasparc t2 cancellation known 31 august 2006 improvements higher clock frequencies range 2ghz larger 4mb on-die l2 cache support ddr-333 sdram new 90nm process ultrasparc iii family processors succeeded ultrasparc iv series ultrasparc iv combined two ultrasparc iii cores onto single piece silicon offered increased clock rates cpu packaging nearly identical offering difference single pin simplifying board manufacturing system design systems used ultrasparc iii processors could accept ultrasparc iv cpu board upgrades
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'hyperlink'. The 10 categories are:
0: Computational Linguistics
1: Databases
2: Operating Systems
3: Computer Architecture
4: Computer Security
5: Internet Protocols
6: Computer File Systems
7: Distributed Computing Architecture
8: Web Technology
9: Programming Language Topics
Question: Based on the content of the target and neighbors' Wikipedia pages, predict the category ID (0 to 9) for the target node.
| 3 | Computer Architecture | wikics | 7,270 | none |
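The task posed above — predicting a category id from a node's text — can be illustrated with a toy baseline. The sketch below is purely hypothetical: the keyword lists are illustrative choices of mine and are not part of the wikics dataset or any published model; a real classifier would use learned features and the hyperlink graph.

```python
# Hypothetical keyword-overlap baseline for the node-classification task
# described above. The keyword sets are illustrative assumptions, not
# derived from the wikics dataset itself.
CATEGORY_KEYWORDS = {
    0: {"parsing", "corpus", "linguistics", "translation"},
    1: {"database", "sql", "query", "relational"},
    2: {"kernel", "scheduler", "operating", "syscall"},
    3: {"microprocessor", "cache", "superscalar", "pipeline", "clock", "cpu"},
    4: {"encryption", "vulnerability", "malware", "authentication"},
    5: {"tcp", "packet", "routing", "protocol"},
    6: {"filesystem", "inode", "directory", "journaling"},
    7: {"cluster", "distributed", "replication", "consensus"},
    8: {"browser", "html", "http", "javascript"},
    9: {"compiler", "syntax", "semantics", "recursion"},
}

def predict_category(text: str) -> int:
    """Return the category id whose keyword set overlaps the text most.

    The dataset text is already lowercased and stopword-stripped, so a
    plain whitespace split approximates its tokenization.
    """
    tokens = set(text.lower().split())
    scores = {cid: len(tokens & kws) for cid, kws in CATEGORY_KEYWORDS.items()}
    return max(scores, key=scores.get)

# Tokens drawn from the target text above: architecture keywords dominate,
# so this toy baseline agrees with the labeled answer (category 3).
sample = ("ultrasparc iii superscalar microprocessor cache clock "
          "pipeline cpu sparc")
print(predict_category(sample))
```

In practice the neighbor information matters too: since the 1-hop relation is a hyperlink, a majority vote over neighbor labels (here, the superscalar-processor neighbor is itself Computer Architecture) would reach the same prediction.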