Transformer models face significant computational challenges in private inference (PI). Existing optimization methods often rely on isolated techniques, neglecting joint structural and operational improvements. We propose IG-3D, a unified framework that integrates structured compression and operator approximation through accurate importance assessment. Our approach first evaluates attention-head importance using Integrated Gradients (IG), offering greater stability and theoretical soundness than plain gradient-based attribution. We then apply a three-dimensional optimization: (1) structurally pruning redundant attention heads; (2) replacing Softmax with an adaptive polynomial approximation to avoid exponential computations; (3) implementing layer-wise GELU substitution to accommodate different layer characteristics. A joint threshold mechanism coordinates compression across dimensions under accuracy constraints. Experimental results on the GLUE benchmark show that our method achieves an average 2.9× speedup in inference latency and a 50% reduction in communication cost, while keeping the accuracy loss within 2.3%, demonstrating significant synergistic effects and a superior accuracy-efficiency trade-off compared to single-technique optimization strategies.
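The abstract does not give implementation details for the IG-based importance scoring. As a rough, self-contained illustration of the underlying attribution method (not the paper's actual head-scoring pipeline), Integrated Gradients for a scalar score function can be approximated with a midpoint Riemann sum along the straight-line path from a baseline, using finite differences in place of autodiff:

```python
import numpy as np

def integrated_gradients(f, x, baseline=None, steps=50):
    """Approximate IG_i(x) = (x_i - x'_i) * ∫₀¹ ∂f/∂x_i(x' + a(x - x')) da.

    Uses a midpoint Riemann sum over the path and central finite
    differences for the gradient; f is any scalar-valued function.
    """
    if baseline is None:
        baseline = np.zeros_like(x)  # common default: all-zeros baseline
    eps = 1e-5
    avg_grad = np.zeros_like(x)
    for k in range(steps):
        point = baseline + ((k + 0.5) / steps) * (x - baseline)
        grad = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = eps
            grad[i] = (f(point + e) - f(point - e)) / (2 * eps)
        avg_grad += grad / steps
    return (x - baseline) * avg_grad

# Toy check on f(x) = sum(x_i^2): the exact attribution is x_i^2, and the
# completeness axiom sum_i IG_i = f(x) - f(baseline) holds.
f = lambda v: float(np.sum(v ** 2))
x = np.array([1.0, -2.0, 3.0])
attr = integrated_gradients(f, x)
```

In a head-importance setting, `f` would be the task loss as a function of per-head mask variables; the completeness property is what gives IG its stability advantage over single-point gradient saliency.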
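The two operator substitutions named in the abstract can be sketched as follows. The polynomial degrees, the fitting range, and the use of a tanh-based GELU as the fitting target are illustrative assumptions; the paper's actual "adaptive" scheme and layer-wise ranges are not specified here:

```python
import math
import numpy as np

def poly_softmax(scores, degree=6):
    """Softmax with exp replaced by a truncated Taylor polynomial.

    Max-shift so inputs are <= 0, then use the degree-`degree` Taylor
    truncation of exp; even-degree truncations of exp are strictly
    positive on the reals, so the weights form a valid distribution.
    """
    t = scores - scores.max()
    approx_exp = sum(t ** k / math.factorial(k) for k in range(degree + 1))
    return approx_exp / approx_exp.sum()

def fit_gelu_poly(degree=4, lo=-4.0, hi=4.0):
    """Layer-wise GELU substitution sketch: least-squares fit of a
    low-degree polynomial to GELU over an assumed activation range
    (in practice, lo/hi would be calibrated per layer)."""
    gelu = lambda v: 0.5 * v * (1 + np.tanh(np.sqrt(2 / np.pi)
                                            * (v + 0.044715 * v ** 3)))
    xs = np.linspace(lo, hi, 1001)
    return np.polyfit(xs, gelu(xs), degree)

scores = np.array([1.0, 2.0, 3.0])
w = poly_softmax(scores)       # close to softmax([1, 2, 3])
coeffs = fit_gelu_poly()       # np.polyval(coeffs, x) is the GELU stand-in
```

Both replacements trade a small numerical error for PI-friendliness: additions and multiplications are cheap under secret sharing or homomorphic encryption, while exp and tanh require expensive protocols.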