The article introduces the Massive Legal Embedding Benchmark (MLEB), a comprehensive benchmark for evaluating legal text embedding models across diverse jurisdictions and document types. MLEB addresses the limitations of existing benchmarks by providing high-quality datasets that demand genuine legal reasoning. The article also presents the newly released Kanon 2 Embedder, which is reported to outperform competing models in both accuracy and efficiency.