Results
Submissions: 34 (69% of accepted papers)
Evaluation results (badges awarded):
- 34 Artifact Available
- 31 Artifact Functional
- 25 Results Reproduced
Paper | Avail. | Funct. | Repro. | Available At |
---|---|---|---|---|
Anon: an FPGA-Based Collective Engine for Distributed Applications | ✓ | | | Repository |
Anvil: Verifying Liveness of Cluster Management Controllers | ✓ | ✓ | ✓ | Repository |
Automatic and Efficient Customization of Neural Networks for ML Applications | ✓ | ✓ | ✓ | Repository |
Beaver: Practical Partial Snapshots for Distributed Cloud Services | ✓ | ✓ | ✓ | Repository |
Bitter: Enabling Efficient Low-Precision Deep Learning Computing through Hardware-aware Tensor Transformation | ✓ | ✓ | ✓ | Repository |
Caravan: Practical Online Learning of In-Network ML Models with Labeling Agents | ✓ | ✓ | ✓ | Repository |
Chop Chop: Byzantine Atomic Broadcast to the Network Limit | ✓ | ✓ | | Repository |
Cuber: Constraint-Guided Parallelization Plan Generation for Deep Learning Training | ✓ | ✓ | | Repository |
DRust: Language-Guided Distributed Shared Memory with Fine Granularity, Full Transparency, and Ultra Efficiency | ✓ | ✓ | ✓ | Repository |
DSig: Breaking the Barrier of Signatures in Data Centers | ✓ | ✓ | ✓ | Repository |
Data-flow Availability: Achieving Timing Assurance on Autonomous Systems | ✓ | ✓ | ✓ | Repository |
Detecting Logic Bugs in Database Engines via Equivalent Expression Transformation | ✓ | ✓ | ✓ | Repository |
DistLLM: Disaggregating Prefill and Decoding for Goodput-optimized Large Language Model Serving | ✓ | ✓ | ✓ | Repository |
Enabling Tensor Language Model to Assist in Generating High-Performance Tensor Programs for Deep Learning | ✓ | ✓ | ✓ | Repository |
Fairness in Serving Large Language Models | ✓ | ✓ | | Repository |
Flock: A Framework for Deploying On-Demand Distributed Trust | ✓ | ✓ | ✓ | Repository |
Inductive Invariants That Spark Joy: Using Invariant Taxonomies to Streamline Distributed Systems Proofs | ✓ | ✓ | ✓ | Repository |
InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management | ✓ | ✓ | ✓ | Repository |
IntOS: Persistent Embedded Operating System and Language Support for Multi-threaded Intermittent Computing | ✓ | ✓ | | Repository |
IronSpec: Increasing the Reliability of Formal Specifications | ✓ | ✓ | ✓ | Repository |
Llumnix: Dynamic Scheduling for Large Language Model Serving | ✓ | ✓ | ✓ | Repository |
Managing Memory Tiers with CXL in Virtualized Environments | ✓ | | | Repository |
Nomad: Non-Exclusive Memory Tiering via Transactional Page Migration | ✓ | ✓ | ✓ | Repository |
Parrot: Efficient Serving of LLM-based Applications with Semantic Variable | ✓ | ✓ | ✓ | Repository |
Performance Interfaces for Hardware Accelerators | ✓ | ✓ | ✓ | Repository |
Sabre: Improving Memory Prefetching in Serverless MicroVMs with Near-Memory Hardware-Accelerated Compression | ✓ | ✓ | ✓ | Repository |
Secret Key Recovery in a Global-Scale End-to-End Encryption System | ✓ | ✓ | ✓ | Repository |
SquirrelFS: using the Rust compiler to check file-system crash consistency | ✓ | ✓ | | Repository |
Taming Throughput-Latency Tradeoff in LLM Inference with Sarathi-Serve | ✓ | ✓ | ✓ | Repository |
USHER: Holistic Interference Avoidance for Resource Optimized ML Inference | ✓ | | | Repository |
VeriSMo: A Verified Security Module for Confidential VMs | ✓ | ✓ | | Repository |
What will it take for Johnny to know when his Cloud job will finish? Towards providing reliable job completion time predictions using PCS | ✓ | ✓ | ✓ | Repository |
dLoRA: Dynamically Orchestrating Requests and Adapters for LoRA LLM Serving | ✓ | ✓ | ✓ | Repository |
𝜇Slope: High Compression and Fast Search on Semi-Structured Logs | ✓ | ✓ | ✓ | Artifact |