AI Models for ChromeOS

Browse free open source AI models and projects for ChromeOS below.

  • 1
    GLM-4.5

    GLM-4.5: Open-source LLM for intelligent agents by Z.ai

    GLM-4.5 is a cutting-edge open-source large language model designed by Z.ai for intelligent agent applications. The flagship GLM-4.5 model has 355 billion total parameters with 32 billion active parameters, while the compact GLM-4.5-Air version offers 106 billion total parameters and 12 billion active parameters. Both models unify reasoning, coding, and intelligent agent capabilities, providing two modes: a thinking mode for complex reasoning and tool usage, and a non-thinking mode for immediate responses. They are released under the MIT license, allowing commercial use and secondary development. GLM-4.5 achieves strong performance on 12 industry-standard benchmarks, ranking 3rd overall, while GLM-4.5-Air balances competitive results with greater efficiency. The models support FP8 and BF16 precision and can handle context windows of up to 128K tokens. Flexible inference is supported through frameworks like vLLM and SGLang, with tool-call and reasoning parsers included; a minimal vLLM sketch follows this entry.
    Downloads: 598 This Week
    Last Update:
    See Project
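
    A minimal local-inference sketch using vLLM's Python API, which the description above names as a supported framework. The checkpoint id and parallelism setting are assumptions; check the project page for the published checkpoints and hardware requirements.

        from vllm import LLM, SamplingParams

        # Hypothetical checkpoint id; the real ids are listed on the project page.
        llm = LLM(model="zai-org/GLM-4.5-Air", tensor_parallel_size=4)
        params = SamplingParams(temperature=0.7, max_tokens=512)
        outputs = llm.generate(["Plan the steps needed to book a flight."], params)
        print(outputs[0].outputs[0].text)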
  • 2
    GLM-4.6

    Agentic, Reasoning, and Coding (ARC) foundation models

    GLM-4.6 is the latest iteration of Zhipu AI’s foundation model, delivering significant advancements over GLM-4.5. It introduces an extended 200K token context window, enabling more sophisticated long-context reasoning and agentic workflows. The model achieves superior coding performance, excelling in benchmarks and practical coding assistants such as Claude Code, Cline, Roo Code, and Kilo Code. Its reasoning capabilities have been strengthened, including improved tool usage during inference and more effective integration within agent frameworks. GLM-4.6 also enhances writing quality, producing outputs that better align with human preferences and role-playing scenarios. Benchmark evaluations demonstrate that it not only outperforms GLM-4.5 but also rivals leading global models such as DeepSeek-V3.1-Terminus and Claude Sonnet 4.
    Downloads: 436 This Week
    Last Update:
    See Project
  • 3
    llama.cpp

    Port of Facebook's LLaMA model in C/C++

    The llama.cpp project enables the inference of Meta's LLaMA model (and other models) in pure C/C++ without requiring a Python runtime. It is designed for efficient and fast model execution, offering easy integration for applications needing LLM-based capabilities. The repository focuses on providing a highly optimized and portable implementation for running large language models directly within C/C++ environments; a short sketch using the Python bindings follows this entry.
    Downloads: 64 This Week
    Last Update:
    See Project
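
    Since llama.cpp itself is a C/C++ library, a common way to script it is through the llama-cpp-python bindings. A minimal sketch, assuming a quantized GGUF model file has already been downloaded:

        from llama_cpp import Llama

        # Path is a placeholder; any GGUF-format model supported by llama.cpp works.
        llm = Llama(model_path="./models/model-q4_k_m.gguf", n_ctx=4096)
        out = llm("Q: What is llama.cpp? A:", max_tokens=64, stop=["Q:"])
        print(out["choices"][0]["text"])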
  • 4
    DeepSeek R1

    Open-source, high-performance AI model with advanced reasoning

    DeepSeek-R1 is an open-source large language model developed by DeepSeek, designed to excel in complex reasoning tasks across domains such as mathematics, coding, and language. DeepSeek-R1 offers unrestricted access for both commercial and academic use. The model employs a Mixture of Experts (MoE) architecture, comprising 671 billion total parameters with 37 billion active parameters per token, and supports a context length of up to 128,000 tokens. DeepSeek-R1's training regimen uniquely integrates large-scale reinforcement learning (RL) without relying on supervised fine-tuning, enabling the model to develop advanced reasoning capabilities. This approach has resulted in performance comparable to leading models like OpenAI's o1, while maintaining cost-efficiency. To further support the research community, DeepSeek has released distilled versions of the model based on architectures such as LLaMA and Qwen; a sketch for running one of these follows this entry.
    Downloads: 48 This Week
    Last Update:
    See Project
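
    A minimal sketch for running one of the released distilled variants with Hugging Face Transformers; the full 671B MoE model requires multi-GPU serving, while the distill id below is one of the published checkpoints.

        from transformers import AutoModelForCausalLM, AutoTokenizer

        name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
        tok = AutoTokenizer.from_pretrained(name)
        model = AutoModelForCausalLM.from_pretrained(name, torch_dtype="auto")
        # The chat template wraps the question in the model's expected prompt format.
        msgs = [{"role": "user", "content": "What is 17 * 24? Reason step by step."}]
        ids = tok.apply_chat_template(msgs, add_generation_prompt=True, return_tensors="pt")
        print(tok.decode(model.generate(ids, max_new_tokens=256)[0], skip_special_tokens=True))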
  • 5
    DeepSeek-V3

    Powerful AI language model (MoE) optimized for efficiency/performance

    DeepSeek-V3 is a robust Mixture-of-Experts (MoE) language model developed by DeepSeek, featuring a total of 671 billion parameters, with 37 billion activated per token. It employs Multi-head Latent Attention (MLA) and the DeepSeekMoE architecture to enhance computational efficiency. The model introduces an auxiliary-loss-free load balancing strategy and a multi-token prediction training objective to boost performance. Trained on 14.8 trillion diverse, high-quality tokens, DeepSeek-V3 underwent supervised fine-tuning and reinforcement learning to fully realize its capabilities. Evaluations indicate that it outperforms other open-source models and rivals leading closed-source models, achieving this with a training duration of 55 days on 2,048 Nvidia H800 GPUs, costing approximately $5.58 million.
    Downloads: 36 This Week
    Last Update:
    See Project
  • 6
    HunyuanWorld-Voyager

    RGBD video generation model conditioned on camera input

    HunyuanWorld-Voyager is a next-generation video diffusion framework developed by Tencent-Hunyuan for generating world-consistent 3D scene videos from a single input image. By leveraging user-defined camera paths, it enables immersive scene exploration and supports controllable video synthesis with high realism. The system jointly produces aligned RGB and depth video sequences, making it directly applicable to 3D reconstruction tasks. At its core, Voyager integrates a world-consistent video diffusion model with an efficient long-range world exploration engine powered by auto-regressive inference. To support training, the team built a scalable data engine that automatically curates large video datasets with camera pose estimation and metric depth prediction. As a result, Voyager delivers state-of-the-art performance on world exploration benchmarks while maintaining photometric, style, and 3D consistency.
    Downloads: 33 This Week
    Last Update:
    See Project
  • 7
    DINOv3

    Reference PyTorch implementation and models for DINOv3

    DINOv3 is the third-generation iteration of Meta's self-supervised visual representation learning framework, building upon the ideas from DINO and DINOv2. It continues the paradigm of learning strong image representations without labels using teacher–student distillation, but introduces a simplified and more scalable training recipe that performs well across datasets and architectures. DINOv3 removes the need for complex augmentations or momentum encoders, streamlining the pipeline while maintaining or improving feature quality. The model supports multiple backbone architectures, including Vision Transformers (ViT), and can handle larger image resolutions with improved stability during training. The learned embeddings generalize robustly across tasks like classification, retrieval, and segmentation without fine-tuning, showing state-of-the-art transfer performance among self-supervised models; a hedged embedding-extraction sketch follows this entry.
    Downloads: 16 This Week
    Last Update:
    See Project
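
    A hedged sketch of extracting image embeddings with a DINO-family backbone. The torch.hub entry point below is an assumption modeled on DINOv2's ("facebookresearch/dinov2", "dinov2_vitb14"); check the DINOv3 README for the actual names and weight-access terms.

        import torch

        # Hypothetical hub names, mirroring the DINOv2 convention.
        model = torch.hub.load("facebookresearch/dinov3", "dinov3_vitb16")
        model.eval()
        img = torch.randn(1, 3, 224, 224)  # stand-in for a normalized image tensor
        with torch.no_grad():
            emb = model(img)  # one global feature vector per image
        print(emb.shape)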
  • 8
    DeepSeek-V3.2-Exp

    An experimental version of DeepSeek model

    DeepSeek-V3.2-Exp is an experimental release of the DeepSeek model family, intended as a stepping stone toward the next-generation architecture. The key innovation in this version is DeepSeek Sparse Attention (DSA), a sparse attention mechanism that aims to optimize training and inference efficiency in long-context settings without degrading output quality; a toy sketch of the general idea follows this entry. According to the authors, the training setup of V3.2-Exp was aligned with V3.1-Terminus so that benchmark results remain largely comparable, even though the internal attention mechanism changes. In public evaluations across a variety of reasoning, code, and question-answering benchmarks (e.g. MMLU, LiveCodeBench, AIME, Codeforces), V3.2-Exp shows performance very close to, and in some cases matching, that of V3.1-Terminus. The repository includes tools and kernels to support the new sparse architecture, such as CUDA kernels and logit indexers, and invokes open-source modules like FlashMLA and DeepGEMM for performance.
    Downloads: 12 This Week
    Last Update:
    See Project
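
    DeepSeek's DSA lives in the repository's custom kernels, but the general idea of sparse attention fits in a few lines: each query attends only to its top-k highest-scoring keys, so per-position cost no longer grows with the full context. A toy PyTorch illustration of that idea, not the actual DSA algorithm:

        import torch
        import torch.nn.functional as F

        def topk_sparse_attention(q, k, v, top_k=64):
            scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)  # [T, T]
            top = scores.topk(top_k, dim=-1)
            mask = torch.full_like(scores, float("-inf"))
            mask.scatter_(-1, top.indices, top.values)               # keep top-k only
            return F.softmax(mask, dim=-1) @ v

        q = k = v = torch.randn(1024, 64)
        print(topk_sparse_attention(q, k, v).shape)  # torch.Size([1024, 64])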
  • 9
    Hunyuan3D-2.1

    From Images to High-Fidelity 3D Assets

    Hunyuan3D-2.1 is Tencent Hunyuan's advanced 3D asset generation system that produces high-fidelity 3D models with Physically Based Rendering (PBR) textures. It is fully open source, with released model weights and training and inference code. It improves on prior versions with a PBR texture synthesis pipeline that models realistic material effects such as reflections and subsurface scattering, and it allows community fine-tuning and extension. It supports both shape generation (mesh geometry) and texture generation modules, and runs cross-platform (macOS, Windows, Linux) via Python / PyTorch, including diffusers-style APIs.
    Downloads: 10 This Week
    Last Update:
    See Project
  • 10
    HunyuanImage-3.0

    A Powerful Native Multimodal Model for Image Generation

    HunyuanImage-3.0 is a powerful, native multimodal text-to-image generation model released by Tencent’s Hunyuan team. It unifies multimodal understanding and generation in a single autoregressive framework, combining text and image modalities seamlessly rather than relying on separate image-only diffusion components. It uses a Mixture-of-Experts (MoE) architecture with many expert subnetworks to scale efficiently, deploying only a subset of experts per token, which allows large parameter counts without linear inference cost explosion. The model is intended to be competitive with closed-source image generation systems, aiming for high fidelity, prompt adherence, fine detail, and even “world knowledge” reasoning (i.e. leveraging context, semantics, or common sense in generation). The GitHub repo includes code, scripts, model loading instructions, inference utilities, prompt handling, and integration with standard ML tooling (e.g. Hugging Face / Transformers).
    Downloads: 9 This Week
    Last Update:
    See Project
  • 11
    HunyuanCustom

    Multimodal-Driven Architecture for Customized Video Generation

    HunyuanCustom is a multimodal video customization framework by Tencent Hunyuan, aimed at generating customized videos featuring particular subjects (people, characters) under flexible conditions while maintaining subject and identity consistency. It supports conditioning via image, audio, video, and text, and can perform subject replacement in videos, generate avatars speaking given audio, or combine multiple subject images. The architecture builds on HunyuanVideo, with added modules for identity reinforcement and modality-specific condition injection, plus a text-image fusion module based on LLaVA for improved multimodal understanding. It is applicable to single- and multi-subject scenarios, video editing and replacement, singing avatars, and similar use cases.
    Downloads: 8 This Week
    Last Update:
    See Project
  • 12
    HunyuanVideo-Avatar

    Tencent Hunyuan Multimodal diffusion transformer (MM-DiT) model

    HunyuanVideo-Avatar is a multimodal diffusion transformer (MM-DiT) model by Tencent Hunyuan for animating static avatar images into dynamic, emotion-controllable, and multi-character dialogue videos, conditioned on audio. It addresses challenges of motion realism, identity consistency, and emotional alignment. Its innovations include a character image injection module for better consistency between training and inference conditioning, an Audio Emotion Module that extracts emotion reference images and transfers emotional style into video sequences, and a Face-Aware Audio Adapter that isolates audio effects on faces, enabling multiple characters to be animated in a scene.
    Downloads: 7 This Week
    Last Update:
    See Project
  • 13
    PRM800K

    800,000 step-level correctness labels on LLM solutions to MATH problems

    PRM800K is a process supervision dataset accompanying the paper Let's Verify Step by Step, providing 800,000 step-level correctness labels on model-generated solutions to problems from the MATH dataset. The repository releases the raw labels and the labeler instructions used in two project phases, enabling researchers to study how human raters graded intermediate reasoning. Data are stored as newline-delimited JSONL files tracked with Git LFS, where each line is a full solution sample that can contain many step-level labels and rich metadata such as labeler UUIDs, timestamps, generation identifiers, and quality-control flags. Each labeled step can include multiple candidate completions with ratings of -1, 0, or +1, optional human-written corrections (phase 1), and a chosen completion index, along with a final finish reason such as found_error, solution, bad_problem, or give_up; a short parsing sketch follows this entry.
    Downloads: 7 This Week
    Last Update:
    See Project
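
    A short sketch of scanning one of the JSONL files and tallying the step-level ratings. The key names below ("label", "steps", "completions", "rating") are assumptions drawn from the description above; verify them against the released files.

        import json
        from collections import Counter

        counts = Counter()
        with open("phase2_train.jsonl") as f:        # path is a placeholder
            for line in f:
                sample = json.loads(line)
                for step in sample["label"]["steps"]:
                    for comp in step.get("completions") or []:
                        if comp.get("rating") is not None:
                            counts[comp["rating"]] += 1  # ratings are -1, 0, or +1
        print(counts)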
  • 14
    Tencent-Hunyuan-Large

    Open-source large language model family from Tencent Hunyuan

    Tencent-Hunyuan-Large is the flagship open-source large language model family from Tencent Hunyuan, offering both pre-trained and instruct (fine-tuned) variants. It is designed with long-context capabilities, quantization support, and high performance on benchmarks across general reasoning, mathematics, language understanding, and Chinese and multilingual tasks, aiming to provide competitive capability with efficient deployment and inference. FP8 quantization support reduces memory usage by roughly 50% while maintaining precision, and the model benchmarks strongly on tasks such as MMLU, MATH, CMMLU, and C-Eval.
    Downloads: 7 This Week
    Last Update:
    See Project
  • 15
    Stable Diffusion

    High-Resolution Image Synthesis with Latent Diffusion Models

    This entry covers Stable Diffusion Version 2. The Stable Diffusion project, developed by Stability AI, is a cutting-edge image synthesis model that uses latent diffusion techniques for high-resolution image generation. It offers an advanced method of generating images from text input, making it highly flexible for various creative applications. The repository contains pretrained models, various checkpoints, and tools that facilitate image generation tasks such as fine-tuning and modifying the models. Stability AI's approach to image synthesis produces detailed, scalable images while maintaining efficiency. A minimal diffusers sketch follows this entry.
    Downloads: 61 This Week
    Last Update:
    See Project
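
    A minimal text-to-image sketch using the diffusers port of Stable Diffusion 2.1; the model id is the published Hugging Face checkpoint, and a CUDA GPU is assumed.

        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
        ).to("cuda")
        image = pipe("a watercolor lighthouse at dawn", num_inference_steps=30).images[0]
        image.save("lighthouse.png")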
  • 16
    DeepSeek Coder V2

    DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence

    DeepSeek-Coder-V2 is the version-2 iteration of DeepSeek's code generation models, refining the original DeepSeek-Coder line with improved architecture, training strategies, and benchmark performance. Where the V1 models already targeted strong code understanding and generation, V2 pushes further on multilingual support and code reasoning, expanding programming-language coverage and extending the context window well beyond V1's 16K. The repository provides updated model weights, evaluation results on benchmarks (e.g. HumanEval, MultiPL-E, APPS), and new inference and serving scripts, along with improvements to infilling and code completion. The project aims to provide a more performant and reliable open-source alternative to closed-source code models, optimized for practical use in code completion, infilling, and code understanding across English and Chinese codebases.
    Downloads: 6 This Week
    Last Update:
    See Project
  • 17
    DeepSeek Coder

    DeepSeek Coder: Let the Code Write Itself

    DeepSeek-Coder is a series of code-specialized language models designed to generate, complete, and infill code (and mixed code plus natural language) with high fluency in both English and Chinese. The models are trained from scratch on a massive corpus (~2 trillion tokens), of which about 87% is code and 13% is natural language. This dataset covers project-level code structure (not just line-by-line snippets), using a large context window (e.g. 16K) and a secondary fill-in-the-blank objective to encourage better contextual completions and infilling. Multiple sizes of the model are offered (e.g. 1.3B, 5.7B, 6.7B, 33B) so users can trade off inference cost against capability. The repo provides model weights, documentation on the training setup, evaluation results on common benchmarks (HumanEval, MultiPL-E, APPS, etc.), and inference tools.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 18
    Style Aligned

    Official code for Style Aligned Image Generation via Shared Attention

    StyleAligned is a diffusion-model editing technique and codebase that preserves the visual “style” of an original image while applying new semantic edits driven by text. Instead of fully re-generating an image—and risking changes to lighting, texture, or rendering choices—the method aligns internal features across denoising steps so the target edit inherits the source style. This alignment acts like a constraint on the model’s evolution, steering composition, palette, and brushwork even as objects or attributes change. The result is more consistent edits across a set, which is crucial for workflows like product variations, character sheets, or brand-coherent art. The repository provides reproducible scripts, reference prompts, and guidance for tuning strengths so users can dial in subtle retouches or bolder substitutions. Because it builds on widely used diffusion checkpoints, creators can integrate it without training or dataset collection.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 19
    DeepSeek MoE

    Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models

    DeepSeek-MoE is DeepSeek's open implementation of a Mixture-of-Experts (MoE) model architecture, meant to increase parameter efficiency by activating only a subset of "expert" submodules per input; a toy routing sketch follows this entry. The repository introduces fine-grained expert segmentation and shared expert isolation to improve specialization while controlling compute cost. For example, the 16.4B-parameter MoE variant claims comparable or better performance than standard dense models like DeepSeek 7B and LLaMA2 7B while using about 40% of their compute. The repo publishes both Base and Chat variants of the 16B MoE model (deepseek-moe-16b) and provides evaluation results across benchmarks. It also includes a quick start with inference instructions (using Hugging Face Transformers) and guidance on fine-tuning (DeepSpeed, hyperparameters, quantization). The code is MIT-licensed, with a separate Model License applied to the models.
    Downloads: 4 This Week
    Last Update:
    See Project
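
    A didactic sketch of the core routing idea described above: a gate scores the experts and only the top-k run per token. This shows plain top-2 routing, not DeepSeek-MoE's fine-grained segmentation or shared-expert isolation.

        import torch
        import torch.nn as nn

        class TinyMoE(nn.Module):
            def __init__(self, dim=64, n_experts=8, top_k=2):
                super().__init__()
                self.gate = nn.Linear(dim, n_experts)
                self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))
                self.top_k = top_k

            def forward(self, x):                      # x: [tokens, dim]
                weights = self.gate(x).softmax(dim=-1)
                vals, idx = weights.topk(self.top_k, dim=-1)
                out = torch.zeros_like(x)
                for slot in range(self.top_k):         # run only the chosen experts
                    for e, expert in enumerate(self.experts):
                        sel = idx[:, slot] == e
                        if sel.any():
                            out[sel] += vals[sel, slot, None] * expert(x[sel])
                return out

        print(TinyMoE()(torch.randn(4, 64)).shape)  # torch.Size([4, 64])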
  • 20
    DeepSeek V2

    Strong, Economical, and Efficient Mixture-of-Experts Language Model

    DeepSeek-V2 is the second major iteration of DeepSeek's foundation language model (LLM) series, a Mixture-of-Experts model that activates only a fraction of its total parameters per token to keep inference economical. It introduces Multi-head Latent Attention (MLA) to compress the key-value cache and the DeepSeekMoE architecture for sparse computation, and supports long context windows. The repository includes model weights, evaluation results across a broad suite of benchmarks (e.g. reasoning, math, multilingual), configuration files, and inference instructions. Compared with V1, V2 targets stronger performance on challenging tasks, more efficient inference, and better alignment with human feedback, and it is meant to solidify DeepSeek's position in benchmark rankings and developer adoption.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 21
    Depth Pro

    Sharp Monocular Metric Depth in Less Than a Second

    Depth Pro is a foundation model for zero-shot metric monocular depth estimation, producing sharp, high-frequency depth maps with absolute scale from a single image. Unlike many prior approaches, it does not require camera intrinsics or extra metadata, yet still outputs metric depth suitable for downstream 3D tasks. Apple highlights both accuracy and speed: the model can synthesize a ~2.25-megapixel depth map in around 0.3 seconds on a standard GPU, enabling near real-time applications. The repo and research page emphasize boundary fidelity and crisp geometry, addressing a common weakness in monocular depth where edges can blur. Community integrations (e.g., inference wrappers and UI nodes) have sprung up around the model, reflecting practical interest in video, AR, and generative pipelines. As a general-purpose monocular depth backbone, Depth Pro slots into 3D reconstruction, relighting, and scene understanding workflows that benefit from metric predictions; a usage sketch follows this entry.
    Downloads: 4 This Week
    Last Update:
    See Project
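
    A usage sketch following the repository's published example; the function names below come from Apple's ml-depth-pro README, but verify them against the current version of the repo.

        import depth_pro

        model, transform = depth_pro.create_model_and_transforms()
        model.eval()
        image, _, f_px = depth_pro.load_rgb("example.jpg")   # path is a placeholder
        prediction = model.infer(transform(image), f_px=f_px)
        depth_m = prediction["depth"]            # metric depth map, in meters
        print(depth_m.shape, prediction["focallength_px"])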
  • 22
    GPT-NeoX

    Implementation of model parallel autoregressive transformers on GPUs

    This repository houses EleutherAI's library for training large-scale language models on GPUs. The framework is based on NVIDIA's Megatron Language Model and has been augmented with techniques from DeepSpeed as well as some novel optimizations. It aims to be a centralized and accessible place to gather techniques for training large-scale autoregressive language models and to accelerate research into large-scale training. For those looking for a TPU-centric codebase, the project recommends Mesh Transformer JAX. If you are not looking to train models with billions of parameters from scratch, this is likely the wrong library to use; for generic inference needs, the Hugging Face transformers library supports GPT-NeoX models, as the minimal sketch after this entry shows.
    Downloads: 4 This Week
    Last Update:
    See Project
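
    A minimal inference sketch using the Transformers port, which the project itself recommends for generic inference. The 20B checkpoint is a large download and needs substantial memory; device_map="auto" assumes the accelerate package is installed.

        from transformers import AutoModelForCausalLM, AutoTokenizer

        name = "EleutherAI/gpt-neox-20b"
        tok = AutoTokenizer.from_pretrained(name)
        model = AutoModelForCausalLM.from_pretrained(
            name, torch_dtype="auto", device_map="auto"
        )
        ids = tok("GPT-NeoX is", return_tensors="pt").input_ids.to(model.device)
        print(tok.decode(model.generate(ids, max_new_tokens=40)[0]))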
  • 23
    Hunyuan3D-1

    A Unified Framework for Text-to-3D and Image-to-3D Generation

    Hunyuan3D-1 is an earlier version in the same 3D generation line by Tencent Hunyuan: a unified framework for text-to-3D and image-to-3D tasks. It combines shape generation and texture synthesis, enabling users to create 3D assets from image or text conditions. While less advanced than version 2.1, it laid the foundations for the later PBR, higher-resolution, and open-source enhancements, and its public documentation is sparser than 2.1's. It enjoys community and ecosystem support (e.g. usage via a Blender addon for geometry and texture) and integration into user-friendly tools and platforms.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 24
    HunyuanDiT

    Diffusion Transformer with Fine-Grained Chinese Understanding

    HunyuanDiT is a high-capability text-to-image diffusion transformer with bilingual (Chinese/English) understanding and multi-turn dialogue capability. It trains a diffusion model in latent space using a transformer backbone and integrates a Multimodal Large Language Model (MLLM) to refine captions and support conversational image generation. It supports adapters such as LoRA, ControlNet (pose, depth, canny), and IP-Adapter to extend control over generation, and can run under constrained VRAM via distilled versions. It integrates with Gradio for web demos, offers diffusers and command-line compatibility, and supports multi-turn text-to-image interaction so users can iteratively refine their images via dialogue.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 25
    HunyuanVideo-I2V

    A Customizable Image-to-Video Model based on HunyuanVideo

    HunyuanVideo-I2V is a customizable image-to-video generation framework from Tencent Hunyuan, built on their HunyuanVideo foundation. Given a static reference image plus an optional prompt, it generates a video sequence that preserves the reference image's identity (especially in the first frame) and allows stylized effects via LoRA adapters. The repository includes pretrained weights, inference and sampling scripts, LoRA training code for adding special effects or customizing generation, and parallel inference via xDiT for multi-GPU speedups. Generation is configurable through options such as resolution, video length, stability mode, flow shift, seed, and CPU offload.
    Downloads: 4 This Week
    Last Update:
    See Project