Show simple item record

Heap Data Allocation to Scratch-Pad Memory in Embedded Systems

dc.contributor.advisor: Barua, Rajeev K
dc.contributor.author: Dominguez, Angel
dc.date.accessioned: 2007-06-22T05:32:19Z
dc.date.available: 2007-06-22T05:32:19Z
dc.date.issued: 2007-04-05
dc.identifier.uri: http://hdl.handle.net/1903/6721
dc.description.abstract: This thesis presents the first-ever compile-time method for allocating a portion of a program's dynamic data to scratch-pad memory. A scratch-pad is a fast, directly addressed, compiler-managed SRAM that replaces the hardware-managed cache. It is motivated by its better real-time guarantees versus cache and by its significantly lower overheads in access time, energy consumption, area, and overall runtime. Dynamic data refers to all objects allocated at run-time in a program, as opposed to static data objects, which are allocated at compile-time. Existing compiler methods for allocating data to scratch-pad can place only code, global, and stack data (static data) in scratch-pad memory; heap and recursive-function objects (dynamic data) are allocated entirely in DRAM, resulting in poor performance for these dynamic data types. Run-time methods based on software caching can place data in scratch-pad, but because of their high overheads from software address translation, they have not been successful, especially for dynamic data. In this thesis we present a dynamic yet compiler-directed allocation method for dynamic data that, for the first time, (i) is able to place a portion of the dynamic data in scratch-pad; (ii) has no software-caching tags; (iii) requires no extra per-access run-time address translation; and (iv) is able to move dynamic data back and forth between scratch-pad and DRAM to better track the program's locality characteristics. With our method, code, global, stack, and heap variables can share the same scratch-pad. When compared to placing all dynamic data in DRAM and only static data in scratch-pad, our results show that our method reduces the average runtime of our benchmarks by 22.3% and the average power consumption by 26.7%, for the same scratch-pad size fixed at 5% of total data size. Significant savings in runtime and energy across a large number of benchmarks were also observed when compared against cache memory organizations, showing our method's success under constrained SRAM sizes when dealing with dynamic data. Lastly, our method minimizes the profile-dependence issues that plague all similar allocation methods through careful analysis of static and dynamic profile information.
dc.format.extent: 3187434 bytes
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.title: Heap Data Allocation to Scratch-Pad Memory in Embedded Systems
dc.type: Dissertation
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.contributor.department: Electrical Engineering
dc.subject.pqcontrolled: Engineering, Electronics and Electrical
dc.subject.pqcontrolled: Computer Science
dc.subject.pquncontrolled: Heap Data
dc.subject.pquncontrolled: Scratch-Pad Memory
dc.subject.pquncontrolled: Dynamic allocation
dc.subject.pquncontrolled: Embedded Systems
dc.subject.pquncontrolled: Recursive Allocation
dc.subject.pquncontrolled: Memory Allocation

