6 links tagged with all of: hugging-face + machine-learning
Links
The Smol Training Playbook on Hugging Face is a practical guide to training machine learning models efficiently with the Hugging Face ecosystem. It covers best practices and methodologies for optimizing the training process, and includes worked examples and resources for both beginners and experienced practitioners.
Trackio is a new open-source experiment tracking library from Hugging Face that simplifies tracking metrics during model training. It offers a local dashboard, seamless integration with Hugging Face Spaces for easy sharing, and compatibility with existing libraries such as wandb, so users can adopt it with minimal changes to their code.
HiDream-I1 is an open-source image-generation foundation model with 17 billion parameters that produces high-quality images in seconds. Recent updates include the release of several model variants and integrations with popular platforms; additional resources and demos are linked in the article.
ZeroGPU lets Hugging Face Spaces use NVIDIA H200 hardware efficiently by releasing GPUs during idle periods instead of keeping them locked. The article shows how ahead-of-time (AoT) compilation with PyTorch can significantly improve performance, cutting image and video generation time with speedups of 1.3x to 1.8x. It also walks through implementing AoT compilation in ZeroGPU Spaces, including advanced techniques like FP8 quantization.
Hugging Face has launched AI Sheets, a no-code tool that simplifies the process of building, enriching, and transforming datasets using open AI models. The user-friendly interface allows users to easily experiment with datasets, generate synthetic data, and refine prompts by providing feedback directly within the tool. It supports both local and cloud deployment, making it accessible for various use cases.
SINQ is a fast and model-agnostic quantization technique that enables the deployment of large language models on GPUs with limited memory while maintaining accuracy. It significantly reduces memory requirements and quantization time, offering improved model quality compared to existing methods. The technique introduces dual scaling to enhance quantization stability, allowing users to quantize models quickly and efficiently.
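The dual-scaling idea behind SINQ can be sketched in a few lines: rather than a single scale per row, keep separate row and column scale vectors and balance the weight matrix with a Sinkhorn-style normalization before rounding to a low-bit grid. The snippet below is a minimal illustrative sketch, not the SINQ implementation; the function names, the symmetric 4-bit grid, and the fixed iteration count are all assumptions.

```python
import numpy as np

def sinkhorn_dual_scales(W, iters=10):
    """Balance W with row/column scale vectors (Sinkhorn-style, illustrative)."""
    s_r = np.ones((W.shape[0], 1))
    s_c = np.ones((1, W.shape[1]))
    X = W.copy()
    for _ in range(iters):
        r = X.std(axis=1, keepdims=True) + 1e-8  # row std deviations
        X = X / r
        s_r = s_r * r
        c = X.std(axis=0, keepdims=True) + 1e-8  # column std deviations
        X = X / c
        s_c = s_c * c
    # Invariant: W == s_r * X * s_c (elementwise, via broadcasting)
    return X, s_r, s_c

def quantize_int4(W):
    """Quantize the balanced matrix to a symmetric 4-bit grid [-7, 7]."""
    X, s_r, s_c = sinkhorn_dual_scales(W)
    scale = np.abs(X).max() / 7.0
    Q = np.clip(np.round(X / scale), -7, 7)
    return Q, scale, s_r, s_c

def dequantize(Q, scale, s_r, s_c):
    """Reconstruct the weights: reapply the shared scale, then both scale vectors."""
    return (Q * scale) * s_r * s_c

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
Q, scale, s_r, s_c = quantize_int4(W)
W_hat = dequantize(Q, scale, s_r, s_c)
err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
```

Because the Sinkhorn-style pass evens out outlier rows and columns before rounding, the shared 4-bit grid wastes fewer levels on extreme values, which is the stability benefit the blurb refers to.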