Browsing by Author "Li, Li"
Now showing 1 - 2 of 2
Item: Crustal Anisotropy Beneath the Trans-North China Orogen and Its Adjacent Areas From Receiver Functions (Frontiers Media S.A., 2021)
Authors: Xu, Xiaoming; Ding, Zhifeng; Li, Li; Niu, Fenglin

As an important segment of the North China Craton, the Trans-North China Orogen (TNCO) has experienced strong tectonic deformation and magmatic activity since the Cenozoic and is characterized by significant seismicity. To understand the mechanisms of crustal deformation and the associated seismic hazard, we determined the crustal thickness (H), Vp/Vs ratio (κ), and crustal anisotropy (fast polarization direction φ and splitting time τ) beneath the TNCO and its adjacent areas by analyzing receiver-function data recorded by a dense seismic array. Measurements of (H, κ) and (φ, τ) were obtained at a total of 309 stations. The Moho depth varies from ∼30 km beneath the western margin of the Bohai Bay Basin to a maximum of ∼48 km beneath the northern Lüliang Mountain, correlating positively with elevation and negatively with the Bouguer gravity anomaly. The average φ is roughly parallel to the strikes of the faults, grabens, and mountain ranges in the study area, whereas a rotating pattern appears around the Datong-Hannuoba volcanic region. Based on the φ measured from the Moho Ps and SKS/SKKS phases, we propose that the crustal deformation and seismic hazard beneath the TNCO could be due to the counterclockwise rotation of the Ordos block driven by the far-field effects of the India-Eurasia collision.

Item: PME: pruning-based multi-size embedding for recommender systems (Frontiers Media S.A., 2023)
Authors: Liu, Zirui; Song, Qingquan; Li, Li; Choi, Soo-Hyun; Chen, Rui; Hu, Xia

Embedding is widely used in recommendation models to learn feature representations. However, the traditional embedding technique that assigns a fixed size to all categorical features may be suboptimal, for the following reasons.
In the recommendation domain, most categorical features' embeddings can be trained with less capacity without impacting model performance, so storing all embeddings at equal length may incur unnecessary memory usage. Existing work that tries to allocate a customized size to each feature usually either scales the embedding size with the feature's popularity or formulates size allocation as an architecture selection problem. Unfortunately, most of these methods either suffer a large performance drop or incur significant extra time cost searching for proper embedding sizes. In this article, instead of formulating size allocation as an architecture selection problem, we approach it from a pruning perspective and propose the Pruning-based Multi-size Embedding (PME) framework. During the search phase, we prune the embedding dimensions that have the least impact on model performance in order to reduce the embedding's capacity. We then show that the customized size of each token can be obtained by transferring the capacity of its pruned embedding, at significantly lower search cost. Experimental results validate that PME efficiently finds proper sizes and hence achieves strong performance while significantly reducing the number of parameters in the embedding layer.
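The abstract does not spell out how pruned dimensions translate into per-token sizes. A minimal sketch of the general idea follows, using simple magnitude-based pruning with a global threshold as a stand-in for the paper's impact-on-performance criterion; the function name, the `keep_fraction` parameter, and the use of NumPy are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def prune_embedding_sizes(emb, keep_fraction=0.5):
    """Derive a customized embedding size per token by pruning.

    Weights with the smallest absolute magnitude are zeroed out
    (a stand-in for PME's least-impact criterion); each token's
    size is the number of dimensions that survive pruning.

    emb: (num_tokens, dim) embedding matrix.
    Returns (sizes, pruned): per-token sizes and the masked matrix.
    """
    flat = np.abs(emb).ravel()
    # Index of the smallest weight we keep: everything below it is pruned.
    k = int(len(flat) * (1.0 - keep_fraction))
    threshold = np.partition(flat, k)[k] if k > 0 else -np.inf
    mask = np.abs(emb) >= threshold
    pruned = emb * mask
    # A token that needed little capacity ends up with a small size.
    sizes = mask.sum(axis=1)
    return sizes, pruned

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))
sizes, pruned = prune_embedding_sizes(emb, keep_fraction=0.5)
```

Because the threshold is global rather than per token, popular or information-rich tokens naturally retain more dimensions than tokens whose weights are uniformly small, which mirrors the multi-size outcome the abstract describes.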