Sungjoo Yoo
POSTECH (Pohang University of Science and Technology)
Smartphone data are stored in two ways: as normal files or as SQLite database files. Recent studies report that SQLite-induced storage traffic often dominates disk accesses. Our investigation shows that SQLite writes are characterized by frequent overwrites of small data. We show that a small non-volatile write buffer is effective in coalescing such overwrites to reduce storage writes. We also present how to make the best use of the write buffer, whose size is limited by the small form factor requirement. We present two new methods, shadow tag and SQLite-aware buffer management, both of which aim at identifying hot storage data to keep in the write buffer. Our experiments with real mobile applications running on a smartphone with a Flash memory-based storage system show an average 56.2% reduction in storage writes.
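To illustrate the coalescing idea, the following is a minimal sketch: an LRU-managed write buffer that absorbs overwrite hits in place and remembers the tags of evicted blocks in a shadow tag set, so a rewrite of a recently evicted block can be recognized as hot. The class name, the trace in main(), and the way the shadow tags are consumed here are assumptions for illustration, not the talk's actual design.

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <list>
#include <unordered_map>
#include <unordered_set>

// Hypothetical sketch of overwrite coalescing in a small non-volatile
// write buffer; an LRU policy is assumed for simplicity.
class WriteBuffer {
public:
    explicit WriteBuffer(std::size_t capacity) : capacity_(capacity) {}

    void Write(uint64_t lba) {
        auto it = entries_.find(lba);
        if (it != entries_.end()) {      // overwrite hit: coalesce in place,
            Touch(it);                   // no storage write is issued
            ++coalesced_;
            return;
        }
        if (shadow_tags_.erase(lba))     // tag of a previously evicted block:
            ++hot_rerefs_;               // evidence the block is hot
        if (entries_.size() == capacity_) Evict();
        lru_.push_front(lba);
        entries_[lba] = lru_.begin();
    }

    std::size_t coalesced() const { return coalesced_; }
    std::size_t flash_writes() const { return flash_writes_; }
    std::size_t hot_rerefs() const { return hot_rerefs_; }

private:
    using Iter = std::list<uint64_t>::iterator;

    void Touch(std::unordered_map<uint64_t, Iter>::iterator it) {
        lru_.splice(lru_.begin(), lru_, it->second);  // move to MRU position
    }

    void Evict() {
        uint64_t victim = lru_.back();   // evicted block is written to Flash;
        lru_.pop_back();                 // only its tag is remembered
        entries_.erase(victim);
        shadow_tags_.insert(victim);
        ++flash_writes_;
    }

    std::size_t capacity_;
    std::size_t coalesced_ = 0, flash_writes_ = 0, hot_rerefs_ = 0;
    std::list<uint64_t> lru_;                        // MRU at front
    std::unordered_map<uint64_t, Iter> entries_;
    std::unordered_set<uint64_t> shadow_tags_;       // tags only, no data
};

int main() {
    WriteBuffer buf(4);
    // Hypothetical trace: SQLite journaling overwrites a few blocks
    // repeatedly while ordinary file writes touch cold blocks once.
    for (int i = 0; i < 100; ++i) {
        buf.Write(0);                    // journal header block, hot
        buf.Write(1 + i % 2);            // two hot B-tree pages
        buf.Write(100 + i);              // cold one-off block
    }
    std::cout << buf.coalesced() << " writes coalesced, "
              << buf.flash_writes() << " reached Flash, "
              << buf.hot_rerefs() << " shadow-tag hits\n";
}
```

In this toy trace, every overwrite of the hot blocks is absorbed in the buffer rather than reaching Flash, while shadow-tag hits single out which evicted blocks deserved to stay.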
Big data processing, e.g., graph computation, is characterized by massive parallelism in computation and a large number of fine-grained random memory accesses, often with structural locality due to graph-like data dependencies. GPUs have recently been gaining attention for servers due to their parallel computation capability. However, the current GPU architecture is not well suited to big data workloads because of its limited capability to handle a large number of memory requests. We present a special function unit, called the memory fast-forward (MFF) unit, to address this problem. The proposed MFF unit provides two key functions. First, it supports pointer chasing, which enables computation threads to issue as many memory requests as possible and thereby increases the potential for coalescing memory requests. Second, it coalesces memory requests bound for the same cache block, which often coincide due to structural locality, thereby reducing memory traffic. Our experiments with graph computation algorithms and real graphs show that the proposed MFF unit can improve the energy efficiency of GPU-based graph computation by 54.6% on average at a negligible area cost.
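The coalescing function can be sketched in software as grouping a batch of fine-grained addresses by cache block and issuing one request per distinct block. The 64 B block size, the CSR edge-array layout, and the base address below are assumptions for illustration; the real MFF unit performs this in hardware alongside pointer chasing.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_set>
#include <vector>

constexpr uint64_t kBlockBytes = 64;  // assumed cache block size

// Collapse a batch of fine-grained addresses into one memory request per
// distinct cache block; this models only the traffic reduction enabled by
// request coalescing, not the MFF pipeline itself.
std::vector<uint64_t> CoalesceByBlock(const std::vector<uint64_t>& addrs) {
    std::unordered_set<uint64_t> seen;
    std::vector<uint64_t> requests;
    for (uint64_t a : addrs) {
        uint64_t block = a / kBlockBytes;
        if (seen.insert(block).second)
            requests.push_back(block * kBlockBytes);  // one request per block
    }
    return requests;
}

int main() {
    // Hypothetical CSR edge array: a vertex's neighbor ids are stored
    // contiguously (structural locality), 4 bytes each, from an assumed base.
    const uint64_t base = 0x1000;
    std::vector<uint64_t> addrs;
    for (int i = 0; i < 40; ++i)
        addrs.push_back(base + 4 * i);    // 40 neighbor loads spanning 160 B
    auto reqs = CoalesceByBlock(addrs);
    std::cout << addrs.size() << " loads -> "
              << reqs.size() << " block requests\n";  // 40 loads -> 3 requests
}
```

Because the 40 neighbor loads span only three 64 B blocks, coalescing cuts the memory requests by more than an order of magnitude in this toy case; pointer chasing matters because it keeps enough requests in flight for such groups to form.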
Deep neural networks (DNNs) are considered a promising candidate for mobile visual recognition as well as big data analytics. The memory behavior of DNNs has two characteristics: (1) the required memory size is mostly determined by the synaptic weights, and (2) the off-chip memory bandwidth for synaptic weights often determines DNN performance in both learning and inference. We introduce our approach to memory-conscious DNN design, covering hardware architectures (GPUs and a dedicated neural processing unit) and learning algorithms. We conclude this talk with a long-term prospect that DNNs can evolve toward brain-inspired computing and brain modeling.
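A back-of-the-envelope example of why weights dominate: the figures below are assumptions (an AlexNet-scale network with roughly 61 M FP32 weights, reloaded once per inference at a camera-like rate), not numbers from the talk.

```cpp
#include <iostream>

int main() {
    // Assumed figures for a rough estimate; not from the talk.
    const double num_weights = 61e6;        // AlexNet-scale parameter count
    const double bytes_per_weight = 4.0;    // FP32
    const double inferences_per_s = 30.0;   // e.g., one per camera frame

    const double footprint_mb = num_weights * bytes_per_weight / 1e6;
    const double bandwidth_gbs = footprint_mb * inferences_per_s / 1e3;

    std::cout << "weight footprint: " << footprint_mb << " MB\n"      // ~244 MB
              << "weight bandwidth: " << bandwidth_gbs << " GB/s\n";  // ~7.3 GB/s
}
```

Even under these modest assumptions, streaming the weights alone consumes several GB/s of off-chip bandwidth, which is why weight traffic, rather than compute, often bounds DNN performance.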
Sungjoo Yoo received his Ph.D. from Seoul National University in 2000. He worked as a researcher at the TIMA laboratory in Grenoble, France, from 2000 to 2004, and was a principal engineer at Samsung System LSI from 2004 to 2008. He joined POSTECH in 2008, where he is now an associate professor. His main research interests include memory-related issues in mobile devices and servers. He received the Best Paper Award at the International SoC Conference (ISOCC) in 2006 and Best Paper Award nominations at the Design Automation Conference (DAC) in 2011 and at Design, Automation and Test in Europe (DATE) in 2002, 2009, and 2015.