
A curated list of neural network pruning and related resources. Inspired by awesome-deep-vision, awesome-adversarial-machine-learning, awesome-deep-learning-papers, and Awesome-NAS.

Table of Contents

- Type of Pruning
- 2021
- 2020
- 2019
- 2018
- 2017
- 2016
- 2015
- Related Repo

Type of Pruning

| Type | `F` | `W` | `Other` |
|:----:|:---:|:---:|:-------:|
| Explanation | Filter pruning | Weight pruning | other types |
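The `F`/`W` distinction above can be illustrated with a minimal NumPy sketch. The layer shape, sparsity level, and function names below are illustrative assumptions, not taken from any listed paper:

```python
import numpy as np

# Hypothetical conv weight tensor: (out_filters, in_channels, kH, kW).
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 3, 3, 3))

def weight_prune(w, sparsity):
    """Type `W`: zero out individual low-magnitude weights; the shape is kept."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

def filter_prune(w, n_keep):
    """Type `F`: drop whole filters with the smallest L1 norms; the layer shrinks."""
    norms = np.abs(w).reshape(w.shape[0], -1).sum(axis=1)
    keep = np.sort(np.argsort(norms)[-n_keep:])
    return w[keep]

W_sparse = weight_prune(W, sparsity=0.5)  # same shape, roughly half the weights zeroed
W_small = filter_prune(W, n_keep=4)       # only 4 of the 8 filters survive
```

Weight pruning yields an unstructured sparse tensor that needs sparse kernels to realize any speedup, while filter pruning produces a genuinely smaller dense layer that accelerates on standard hardware, which is one reason many entries below target type `F`.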

2021

| Title | Venue | Type | Code |
|:------|:-----:|:----:|:----:|
| A Probabilistic Approach to Neural Network Pruning | ICML | F | - |
| Accelerate CNNs from Three Dimensions: A Comprehensive Pruning Framework | ICML | F | - |
| Group Fisher Pruning for Practical Network Compression | ICML | F | PyTorch(Author) |
| On the Predictability of Pruning Across Scales | ICML | W | - |
| Towards Compact CNNs via Collaborative Compression | CVPR | F | PyTorch(Author) |
| Content-Aware GAN Compression | CVPR | F | PyTorch(Author) |
| Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks | CVPR | F | PyTorch(Author) |
| NPAS: A Compiler-aware Framework of Unified Network Pruning and Architecture Search for Beyond Real-Time Mobile Acceleration | CVPR | F | - |
| Network Pruning via Performance Maximization | CVPR | F | - |
| Convolutional Neural Network Pruning with Structural Redundancy Reduction | CVPR | F | - |
| Manifold Regularized Dynamic Network Pruning | CVPR | F | - |
| Joint-DetNAS: Upgrade Your Detector with NAS, Pruning and Dynamic Distillation | CVPR | FO | - |
| A Gradient Flow Framework For Analyzing Network Pruning | ICLR | F | PyTorch(Author) |
| Neural Pruning via Growing Regularization | ICLR | F | PyTorch(Author) |
| ChipNet: Budget-Aware Pruning with Heaviside Continuous Approximations | ICLR | F | PyTorch(Author) |
| Network Pruning That Matters: A Case Study on Retraining Variants | ICLR | F | PyTorch(Author) |
| Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network | ICLR | W | PyTorch(Author) |
| Layer-adaptive Sparsity for the Magnitude-based Pruning | ICLR | W | PyTorch(Author) |
| Pruning Neural Networks at Initialization: Why Are We Missing the Mark? | ICLR | W | - |
| Robust Pruning at Initialization | ICLR | W | - |

2020

| Title | Venue | Type | Code |
|:------|:-----:|:----:|:----:|
| HYDRA: Pruning Adversarially Robust Neural Networks | NeurIPS | W | PyTorch(Author) |
| Logarithmic Pruning is All You Need | NeurIPS | W | - |
| Directional Pruning of Deep Neural Networks | NeurIPS | W | - |
| Movement Pruning: Adaptive Sparsity by Fine-Tuning | NeurIPS | W | PyTorch(Author) |
| Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot | NeurIPS | W | PyTorch(Author) |
| Neuron Merging: Compensating for Pruned Neurons | NeurIPS | F | PyTorch(Author) |
| Neuron-level Structured Pruning using Polarization Regularizer | NeurIPS | F | PyTorch(Author) |
| SCOP: Scientific Control for Reliable Neural Network Pruning | NeurIPS | F | PyTorch(Author) |
| Storage Efficient and Dynamic Flexible Runtime Channel Pruning via Deep Reinforcement Learning | NeurIPS | F | - |
| The Generalization-Stability Tradeoff In Neural Network Pruning | NeurIPS | F | PyTorch(Author) |
| Pruning Filter in Filter | NeurIPS | Other | PyTorch(Author) |
| Position-based Scaled Gradient for Model Quantization and Pruning | NeurIPS | Other | PyTorch(Author) |
| Bayesian Bits: Unifying Quantization and Pruning | NeurIPS | Other | - |
| Pruning neural networks without any data by iteratively conserving synaptic flow | NeurIPS | Other | PyTorch(Author) |
| EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning | ECCV (Oral) | F | PyTorch(Author) |
| DSA: More Efficient Budgeted Pruning via Differentiable Sparsity Allocation | ECCV | F | - |
| DHP: Differentiable Meta Pruning via HyperNetworks | ECCV | F | PyTorch(Author) |
| Meta-Learning with Network Pruning | ECCV | W | - |
| Accelerating CNN Training by Pruning Activation Gradients | ECCV | W | - |
| DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search | ECCV | Other | - |
| Differentiable Joint Pruning and Quantization for Hardware Efficiency | ECCV | Other | - |
| Channel Pruning via Automatic Structure Search | IJCAI | F | PyTorch(Author) |
| Adversarial Neural Pruning with Latent Vulnerability Suppression | ICML | W | - |
| Proving the Lottery Ticket Hypothesis: Pruning is All You Need | ICML | W | - |
| Soft Threshold Weight Reparameterization for Learnable Sparsity | ICML | WF | PyTorch(Author) |
| Network Pruning by Greedy Subnetwork Selection | ICML | F | - |
| Operation-Aware Soft Channel Pruning using Differentiable Masks | ICML | F | - |
| DropNet: Reducing Neural Network Complexity via Iterative Pruning | ICML | F | - |
| Towards Efficient Model Compression via Learned Global Ranking | CVPR (Oral) | F | PyTorch(Author) |
| HRank: Filter Pruning using High-Rank Feature Map | CVPR (Oral) | F | PyTorch(Author) |
| Neural Network Pruning with Residual-Connections and Limited-Data | CVPR (Oral) | F | - |
| Multi-Dimensional Pruning: A Unified Framework for Model Compression | CVPR (Oral) | WF | - |
| DMCP: Differentiable Markov Channel Pruning for Neural Networks | CVPR (Oral) | F | TensorFlow(Author) |
| Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression | CVPR | F | PyTorch(Author) |
| Few Sample Knowledge Distillation for Efficient Network Compression | CVPR | F | - |
| Discrete Model Compression With Resource Constraint for Deep Neural Networks | CVPR | F | - |
| Structured Compression by Weight Encryption for Unstructured Pruning and Quantization | CVPR | W | - |
| Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration | CVPR | F | - |
| APQ: Joint Search for Network Architecture, Pruning and Quantization Policy | CVPR | F | - |
| Comparing Rewinding and Fine-tuning in Neural Network Pruning | ICLR (Oral) | WF | TensorFlow(Author) |
| A Signal Propagation Perspective for Pruning Neural Networks at Initialization | ICLR (Spotlight) | W | - |
| ProxSGD: Training Structured Neural Networks under Regularization and Constraints | ICLR | W | TF+PT(Author) |
| One-Shot Pruning of Recurrent Neural Networks by Jacobian Spectrum Evaluation | ICLR | W | - |
| Lookahead: A Far-sighted Alternative of Magnitude-based Pruning | ICLR | W | PyTorch(Author) |
| Dynamic Model Pruning with Feedback | ICLR | WF | - |
| Provable Filter Pruning for Efficient Neural Networks | ICLR | F | - |
| Data-Independent Neural Pruning via Coresets | ICLR | W | - |
| AutoCompress: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates | AAAI | F | - |
| DARB: A Density-Aware Regular-Block Pruning for Deep Neural Networks | AAAI | Other | - |
| Pruning from Scratch | AAAI | Other | - |
| Reborn filters: Pruning convolutional neural networks with limited data | AAAI | F | - |

2019

| Title | Venue | Type | Code |
|:------|:-----:|:----:|:----:|
| Network Pruning via Transformable Architecture Search | NeurIPS | F | PyTorch(Author) |
| Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks | NeurIPS | F | PyTorch(Author) |
| Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask | NeurIPS | W | TensorFlow(Author) |
| One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers | NeurIPS | W | - |
| Global Sparse Momentum SGD for Pruning Very Deep Neural Networks | NeurIPS | W | PyTorch(Author) |
| AutoPrune: Automatic Network Pruning by Regularizing Auxiliary Parameters | NeurIPS | W | - |
| Model Compression with Adversarial Robustness: A Unified Optimization Framework | NeurIPS | Other | PyTorch(Author) |
| MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning | ICCV | F | PyTorch(Author) |
| Accelerate CNN via Recursive Bayesian Pruning | ICCV | F | - |
| Adversarial Robustness vs Model Compression, or Both? | ICCV | W | PyTorch(Author) |
| Learning Filter Basis for Convolutional Neural Network Compression | ICCV | Other | - |
| Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration | CVPR (Oral) | F | PyTorch(Author) |
| Towards Optimal Structured CNN Pruning via Generative Adversarial Learning | CVPR | F | PyTorch(Author) |
| Centripetal SGD for Pruning Very Deep Convolutional Networks with Complicated Structure | CVPR | F | PyTorch(Author) |
| On Implicit Filter Level Sparsity in Convolutional Neural Networks (Extension1, Extension2) | CVPR | F | PyTorch(Author) |
| Structured Pruning of Neural Networks with Budget-Aware Regularization | CVPR | F | - |
| Importance Estimation for Neural Network Pruning | CVPR | F | PyTorch(Author) |
| OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks | CVPR | F | - |
| Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search | CVPR | Other | TensorFlow(Author) |
| Variational Convolutional Neural Network Pruning | CVPR | - | - |
| The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks | ICLR (Best) | W | TensorFlow(Author) |
| Rethinking the Value of Network Pruning | ICLR | F | PyTorch(Author) |
| Dynamic Channel Pruning: Feature Boosting and Suppression | ICLR | F | TensorFlow(Author) |
| SNIP: Single-shot Network Pruning based on Connection Sensitivity | ICLR | W | TensorFlow(Author) |
| Dynamic Sparse Graph for Efficient Deep Learning | ICLR | F | CUDA(3rd) |
| Collaborative Channel Pruning for Deep Networks | ICML | F | - |
| Approximated Oracle Filter Pruning for Destructive CNN Width Optimization (github) | ICML | F | - |
| EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis | ICML | W | PyTorch(Author) |
| COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning | IJCAI | F | TensorFlow(Author) |

2018

| Title | Venue | Type | Code |
|:------|:-----:|:----:|:----:|
| Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers | ICLR | F | TensorFlow(Author), PyTorch(3rd) |
| To prune, or not to prune: exploring the efficacy of pruning for model compression | ICLR | W | - |
| Discrimination-aware Channel Pruning for Deep Neural Networks | NeurIPS | F | TensorFlow(Author) |
| Frequency-Domain Dynamic Pruning for Convolutional Neural Networks | NeurIPS | W | - |
| Learning Sparse Neural Networks via Sensitivity-Driven Regularization | NeurIPS | WF | - |
| AMC: AutoML for Model Compression and Acceleration on Mobile Devices | ECCV | F | TensorFlow(3rd) |
| Data-Driven Sparse Structure Selection for Deep Neural Networks | ECCV | F | MXNet(Author) |
| Coreset-Based Neural Network Compression | ECCV | F | PyTorch(Author) |
| Constraint-Aware Deep Neural Network Compression | ECCV | W | SkimCaffe(Author) |
| A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers | ECCV | W | Caffe(Author) |
| PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning | CVPR | F | PyTorch(Author) |
| NISP: Pruning Networks using Neuron Importance Score Propagation | CVPR | F | - |
| CLIP-Q: Deep Network Compression Learning by In-Parallel Pruning-Quantization | CVPR | W | - |
| “Learning-Compression” Algorithms for Neural Net Pruning | CVPR | W | - |
| Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks | IJCAI | F | PyTorch(Author) |
| Accelerating Convolutional Networks via Global & Dynamic Filter Pruning | IJCAI | F | - |

2017

| Title | Venue | Type | Code |
|:------|:-----:|:----:|:----:|
| Pruning Filters for Efficient ConvNets | ICLR | F | PyTorch(3rd) |
| Pruning Convolutional Neural Networks for Resource Efficient Inference | ICLR | F | TensorFlow(3rd) |
| Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee | NeurIPS | W | TensorFlow(Author) |
| Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon | NeurIPS | W | PyTorch(Author) |
| Runtime Neural Pruning | NeurIPS | F | - |
| Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning | CVPR | F | - |
| ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | ICCV | F | Caffe(Author), PyTorch(3rd) |
| Channel pruning for accelerating very deep neural networks | ICCV | F | Caffe(Author) |
| Learning Efficient Convolutional Networks Through Network Slimming | ICCV | F | PyTorch(Author) |

2016

| Title | Venue | Type | Code |
|:------|:-----:|:----:|:----:|
| Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | ICLR (Best) | W | Caffe(Author) |
| Dynamic Network Surgery for Efficient DNNs | NeurIPS | W | Caffe(Author) |

2015

| Title | Venue | Type | Code |
|:------|:-----:|:----:|:----:|
| Learning both Weights and Connections for Efficient Neural Networks | NeurIPS | W | PyTorch(3rd) |

Related Repo

- Awesome-model-compression-and-acceleration
- EfficientDNNs
- Embedded-Neural-Network
- awesome-AutoML-and-Lightweight-Models
- Model-Compression-Papers
- knowledge-distillation-papers
- Network-Speed-and-Compression

 

Source: https://github.com/he-y/Awesome-Pruning