Persdre/NeurIPS-2024-LLM-Papers

NeurIPS-2024-LLM-Papers

A collection of LLM-related papers accepted at NeurIPS 2024.

(Abstracts and arXiv links will be uploaded soon.)

Title
Improving Sparse Decomposition of Language Model Activations with Gated Sparse Autoencoders
IDGen: Item Discrimination Induced Prompt Generation for LLM Evaluation
Toward a Stable, Fair, and Comprehensive Evaluation of Object Hallucination in Large Vision-Language Models
Exploring Context Window of Large Language Models via Decomposed Positional Vectors
FLAME: Factuality-Aware Alignment for Large Language Models
Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses
Mesa-Extrapolation: A Weave Position Encoding Method for Enhanced Extrapolation in LLMs
Aligning Large Language Models with Representation Editing: A Control Perspective
GTBench: Uncovering the Strategic Reasoning Capabilities of LLMs via Game-Theoretic Evaluations
Alignment at Pre-training! Towards Native Alignment for Arabic LLMs
Automated Multi-level Preference for MLLMs
DDK: Distilling Domain Knowledge for Efficient Large Language Models
LLM-ESR: Large Language Models Enhancement for Long-tailed Sequential Recommendation
Efficient Contextual LLM Cascades through Budget-Constrained Policy Learning
Does Reasoning Emerge? Examining the Probabilities of Causation in Large Language Models
Advancing Cross-domain Discriminability in Continual Learning of Vision-Language Models
Text-Guided Attention is All You Need for Zero-Shot Robustness in Vision-Language Models
InfLLM: Training-Free Long-Context Extrapolation for LLMs with an Efficient Context Memory
Enhancing Multiple Dimensions of Trustworthiness in LLMs via Sparse Activation Control
UMFC: Unsupervised Multi-Domain Feature Calibration for Vision-Language Models
Never Miss A Beat: An Efficient Recipe for Context Window Extension of Large Language Models with Consistent “Middle” Enhancement
Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models
Latent Paraphrasing: Perturbation on Layers Improves Knowledge Injection in Language Models
Lever LM: Configuring In-Context Sequence to Lever Large Vision Language Models
Heavy-Tailed Class Imbalance and Why Adam Outperforms Gradient Descent on Language Models
MoGU: A Framework for Enhancing Safety of Open-Sourced LLMs While Preserving Their Usability
Tree of Attacks: Jailbreaking Black-Box LLMs Automatically
Learning Goal-Conditioned Representations in Reward Models for Aligning Language Models
Unveiling Encoder-Free Vision-Language Models
SpeechAlign: Speech Language Models Can Self-Improve via Preference Optimization
Multimodal Large Language Models Make Text-to-Image Generative Models Align Better
SpecExec: Massively Parallel Speculative Decoding For Interactive LLM Inference on Consumer Devices
Human-Readable Fingerprint for Large Language Models
Mixture of In-Context Experts Enhance LLMs' Long Context Awareness
Alleviating Hallucinations in Large Vision-Language Models through Hallucination-Induced Optimization
Leveraging Environment Interaction for Automated PDDL Generation and Planning with Large Language Models
LACIE: Listener-Aware Finetuning for Calibration in Large Language Models
Large language model validity via enhanced conformal prediction methods
Exploiting LLM Quantization
Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models
Understanding Emergent Abilities of Language Models from the Loss Perspective
LLM Circuit Analyses Are Consistent Across Training and Scale
Boosting Text-to-Video Generative Model with MLLMs Feedback
Visual Anchors Are Strong Information Aggregators For Multimodal Large Language Model
You Only Cache Once: Decoder-Decoder Architectures for Language Models
Delving into the Reversal Curse: How Far Can Large Language Models Generalize?
ProSST: Protein Language Modeling with Quantized Structure and Disentangled Attention
Large Language Models as Urban Residents: An LLM Agent Framework for Personal Mobility Generation
LLM Evaluators Recognize and Favor Their Own Generations
Distributional Preference Alignment of LLMs via Optimal Transport
Risk-Averse Finetuning of Large Language Models
Truth is Universal: Robust Detection of Lies in LLMs
Large Language Models as Hyper-Heuristics for Combinatorial Optimization
Rethinking Memory and Communication Costs for Efficient Large Language Model Training
EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models
AmoebaLLM: Constructing Any-Shape Large Language Models for Efficient and Instant Deployment
Compute-efficient LLM Training via Online Batch Selection
Can LLMs Learn by Teaching? A Preliminary Study
Understanding Linear Probing then Fine-tuning Language Models from NTK Perspective
TableRAG: Million-Token Tabular Reasoning with Large Language Models
iVideoGPT: Interactive VideoGPTs are Scalable World Models
Linguistic Collapse: Neural Collapse in (Large) Language Models
Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs
Diffusion of Thought: Chain-of-Thought Reasoning in Diffusion Language Models
Decoding-Time Language Model Alignment with Multiple Objectives
Confidence Regulation Neurons in Language Models
Approaching Human-Level Forecasting with Language Models
BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models
Mobility-LLM: Learning Visiting Intentions and Travel Preference from Human Mobility Data with Large Language Models
KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
ALPS: Improved Optimization for Highly Sparse One-Shot Pruning for Large Language Models
LLMs as Zero-shot Graph Learners: Alignment of GNN Representations with LLM Token Embeddings
Long-form factuality in large language models
Cooperate or Collapse: Emergence of Sustainability in a Society of LLM Agents
AutoTimes: Autoregressive Time Series Forecasters via Large Language Models
Probing the Decision Boundaries of In-context Learning in Large Language Models
Bridge the Modality and Capacity Gaps in Vision-Language Model Selection
A Unified Debiasing Approach for Vision-Language Models across Modalities and Tasks
Unleash Region Understanding in Intermediate Layers for MLLM-based Referring Expression Generation
A Critical Evaluation of AI Feedback for Aligning Large Language Models
Enhancing Large Language Models through Adaptive Tokenizers
Adversarial Moment-Matching Distillation of Large Language Models
LLM Dataset Inference: Detect Datasets, not Strings
MAGNET: Improving the Multilingual Fairness of Language Models with Adaptive Gradient-Based Tokenization
Unveiling Causal Reasoning in Large Language Models: Reality or Mirage?
Are Language Models Actually Useful for Time Series Forecasting?
Grounding Multimodal Large Language Models in Actions
Invariant Tokenization for Language Model Enabled Crystal Materials Generation
FlowLLM: Flow Matching for Material Generation with Learned Base Distributions
When LLM Meets DRL: Advancing Jailbreaking Efficiency via DRL-guided Search
SlowFocus: Enhancing Fine-grained Temporal Understanding in Video LLM
Resolving Discrepancies in Compute-Optimal Scaling of Language Models
Enhancing Large Vision Language Models with Self-Training on Image Comprehension
AutoSurvey: Large Language Models Can Automatically Write Surveys
Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training
Segmenting Watermarked Texts From Language Models
The Best of Both Worlds: Toward an Honest and Helpful Large Language Model
Be like a Goldfish, Don't Memorize! Mitigating Memorization in Generative LLMs
MDAgents: An Adaptive Collaboration of LLMs for Medical Decision Making
Divergences between Language Models and Human Brains
GPT as Visual Explainer
QuanTA: Efficient High-Rank Fine-Tuning of LLMs with Quantum-Informed Tensor Adaptation
Aligning LLM Agents by Learning Latent Preference from User Edits
MutaPLM: Protein Language Modeling for Mutation Explanation and Engineering
Interpreting Learned Feedback Patterns in Large Language Models
Is Programming by Example solved by LLMs?
Co-occurrence is not Factual Association in Language Models
Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates
RestoreAgent: Autonomous Image Restoration Agent via Multimodal Large Language Models
Prompting LLMs for Social Relation Reasoning via Greedy Segment Optimization
MediQ: Question-Asking LLMs for Adaptive and Reliable Medical Reasoning
A Decision-Language Model (DLM) for Dynamic Restless Multi-Armed Bandit Tasks in Public Health
StrategyLLM: Large Language Models as Strategy Generators, Executors, Optimizers, and Evaluators for Problem Solving
Uncovering Safety Risks of Large Language Models through Concept Activation Vector
WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models
LLM-AutoDA: Large Language Model-Driven Automatic Data Augmentation for Long-tailed Problems
Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs
AGILE: A Novel Framework of LLM Agent
ALPINE: Unveiling The Planning Capability of Autoregressive Learning in Language Models
GenArtist: Multimodal LLM as an Agent for Unified Image Generation and Editing
Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach
Large Scale Transfer Learning for Tabular Data via Language Modeling
Stepwise Alignment for Constrained Language Model Policy Optimization
End-to-End Ontology Learning with Large Language Models
WizardArena: Post-training Large Language Models via Simulated Offline Chatbot Arena
Ad Auctions for LLMs via Retrieval Augmented Generation
LLaMo: Large Language Model-based Molecular Graph Assistant
D-LLM: A Token Adaptive Computing Resource Allocation Strategy for Large Language Models
LLM-based Skill Diffusion for Zero-shot Policy Adaptation
SGLang: Efficient Execution of Structured Language Model Programs
NoiseGPT: Label Noise Detection and Rectification through Probability Curvature
RaVL: Discovering and Mitigating Spurious Correlations in Fine-Tuned Vision-Language Models
Pretrained Large Language Models Use Fourier Features to Compute Addition
WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models
How Do Large Language Models Acquire Factual Knowledge During Pretraining?
Meaningful Learning: Enhancing Abstract Reasoning in Large Language Models via Generic Fact Guidance
GraphVis: Boosting LLMs with Visual Knowledge Graph Integration
CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts
HippoRAG: Neurobiologically Inspired Long-Term Memory for Large Language Models
BoNBoN Alignment for Large Language Models: on the Sweetness of Best-of-n Sampling
Fundamental Limits of Prompt Compression: A Rate-Distortion Framework for Black-Box Language Models
Vision-Language Models are Strong Noisy Label Detectors
Online Adaptation of Language Models with a Memory of Amortized Contexts
FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations
Information Re-Organization Improves Reasoning in Large Language Models
Large Language Models Must Be Taught to Know What They Don’t Know
The AlCHEmist: Automated Labeling 500x CHEaper than LLM Data Annotators
Déjà Vu Memorization in Vision–Language Models
Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning
LLMDFA: Analyzing Dataflow in Code with Large Language Models
Measuring Progress in Dictionary Learning for Language Model Interpretability with Board Game Models
RankRAG: Unifying Retrieval-Augmented Generation and Context Ranking in LLMs
Deep Bayesian Active Learning for Preference Modeling in Large Language Models
Jailbreaking Large Language Models Against Moderation Guardrails via Cipher Characters
ConStat: Performance-Based Contamination Detection in Large Language Models
Optimized Feature Generation for Tabular Data via LLMs with Decision Tree Reasoning
MQT-LLaVA: Matryoshka Query Transformer for Large Vision-Language Models
Generating Code World Models with Large Language Models Guided by Monte Carlo Tree Search
Compact Language Models via Pruning and Knowledge Distillation
Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models
Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization
Richelieu: Self-Evolving LLM-Based Agents for AI Diplomacy
Implicit Multimodal Alignment: On the Generalization of Frozen LLMs to Multimodal Inputs
ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search
Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data
CogVLM: Visual Expert for Pretrained Language Models
EAI: Emotional Decision-Making of LLMs in Strategic Games and Ethical Dilemmas
Ensemble Learning for Heterogeneous Large Language Models with Deep Parallel Collaboration
Exploring the Role of Large Language Models in Prompt Encoding for Diffusion Models
Large Language Model Unlearning
Efficient Adversarial Training in LLMs with Continuous Attacks
When and How Does Synthetic Data Improve Reasoning Capabilities of Language Models?
Improved Generation of Adversarial Examples Against Safety-aligned LLMs
Slot-VLM: Object-Event Slots for Video-Language Modeling
Block Transformer: Global-to-Local Language Modeling for Fast Inference
PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models
Inevitable Trade-off between Watermark Strength and Speculative Sampling Efficiency for Language Models
HYSYNTH: Context-Free LLM Approximation for Guiding Program Synthesis
Stratified Prediction-Powered Inference for Effective Hybrid Evaluation of Language Models
HENASY: Learning to Assemble Scene-Entities for Interpretable Egocentric Video-Language Model
Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models
Prediction-Powered Ranking of Large Language Models
How does Architecture Influence the Base Capabilities of Pre-trained Language Models? A Case Study Based on FFN-Wider and MoE Transformers
RouterDC: Query-Based Router by Dual Contrastive Learning for Assembling Large Language Models
MixEval: Fast and Dynamic Human Preference Approximation with LLM Benchmark Mixtures
Calibrated Preference Optimization for Direct Language Model Alignment
