Showing 324 open source projects for "nvidia"

  • 1
    NVIDIA NeMo

    Toolkit for conversational AI

    NVIDIA NeMo, part of the NVIDIA AI platform, is a toolkit for building new state-of-the-art conversational AI models. NeMo has separate collections for Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Text-to-Speech (TTS) models. Each collection consists of prebuilt modules that include everything needed to train on your data. Every module can easily be customized, extended, and composed to create new conversational AI model architectures.
    Downloads: 8 This Week
    See Project
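    The NeMo collections described above are typically driven from Python. Below is a minimal, hedged sketch of loading a pretrained ASR model and transcribing one file; the model name and audio path are illustrative placeholders, and the exact call signatures can vary between NeMo releases.

        # Hedged sketch: load a pretrained NeMo ASR model and transcribe audio.
        # The model name and file path are illustrative placeholders.
        import nemo.collections.asr as nemo_asr

        # Restore a pretrained English CTC model (name is an assumption).
        asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained("QuartzNet15x5Base-En")

        # Transcribe a local 16 kHz mono WAV file (path is a placeholder).
        transcripts = asr_model.transcribe(["/path/to/sample.wav"])
        print(transcripts[0])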
  • 2
    NVIDIA FLARE

    NVIDIA Federated Learning Application Runtime Environment

    NVIDIA FLARE is a domain-agnostic, open-source, extensible SDK that allows researchers and data scientists to adapt existing ML/DL workflows (PyTorch, TensorFlow, scikit-learn, XGBoost, etc.) to a federated paradigm. It enables platform developers to build secure, privacy-preserving offerings for distributed multi-party collaboration. NVIDIA FLARE is built on a componentized architecture.
    Downloads: 3 This Week
    See Project
  • 3
    NVIDIA AgentIQ

    The NVIDIA AgentIQ toolkit is an open-source library

    NVIDIA AgentIQ is an open-source toolkit designed to efficiently connect, evaluate, and accelerate teams of AI agents. It provides a framework-agnostic platform that integrates seamlessly with various data sources and tools, enabling developers to build composable and reusable agentic workflows. By treating agents, tools, and workflows as simple function calls, AgentIQ facilitates rapid development and optimization of AI-driven applications.
    Downloads: 2 This Week
    See Project
  • 4
    NVIDIA Merlin

    Library providing end-to-end GPU-accelerated recommender systems

    NVIDIA Merlin is an open-source library that accelerates recommender systems on NVIDIA GPUs. The library enables data scientists, machine learning engineers, and researchers to build high-performing recommenders at scale. Merlin includes tools to address common feature engineering, training, and inference challenges. Each stage of the Merlin pipeline is optimized to support hundreds of terabytes of data, all accessible through easy-to-use APIs.
    Downloads: 1 This Week
    See Project
  • 5
    NVIDIA GPU Exporter

    Nvidia GPU exporter for Prometheus using the nvidia-smi binary

    Nvidia GPU exporter for Prometheus that uses the nvidia-smi binary to gather metrics. There are many Nvidia GPU exporters out there; however, they have problems such as not being maintained, not providing pre-built binaries, depending on Linux and/or Docker, or targeting enterprise setups (DCGM).
    Downloads: 7 This Week
    See Project
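    Since the exporter above exposes standard Prometheus text metrics, a quick way to inspect it is to read its /metrics endpoint directly. The sketch below assumes the exporter's common default port (9835) and a nvidia_smi_ metric-name prefix; both are assumptions, not taken from this listing.

        # Hedged sketch: read raw Prometheus metrics from a locally running
        # GPU exporter. Port and metric prefix are assumptions about defaults.
        import urllib.request

        EXPORTER_URL = "http://localhost:9835/metrics"  # placeholder endpoint

        with urllib.request.urlopen(EXPORTER_URL, timeout=5) as resp:
            text = resp.read().decode("utf-8")

        # Print only GPU-utilization-related samples (prefix is an assumption).
        for line in text.splitlines():
            if line.startswith("nvidia_smi_") and "utilization" in line:
                print(line)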
  • 6
    NVIDIA Linux Open GPU Kernel Module

    NVIDIA Linux open GPU kernel module source

    This is the source release of the NVIDIA Linux open GPU kernel modules, version 530.41.03. Note that the kernel modules built here must be used with GSP firmware and user-space NVIDIA GPU driver components from a corresponding 530.41.03 driver release. Currently, the kernel modules can be built for x86_64 or aarch64. If cross-compiling, set the appropriate toolchain variables on the make command line. Any reasonably modern version of GCC or Clang can be used to build the kernel modules.
    Downloads: 6 This Week
    See Project
  • 7
    NVIDIA Isaac Sim

    NVIDIA Isaac Sim is an open-source application on NVIDIA Omniverse

    NVIDIA Isaac Sim is a high-fidelity robotics simulation platform built on NVIDIA Omniverse to develop, test, and validate AI-driven robots in physically accurate virtual environments. It supports a wide array of robotics formats (URDF, MJCF, CAD), includes GPU-accelerated physics, and features immersive RTX rendering and multisensory simulation: realistic physics via GPU-accelerated engines and RTX ray tracing, plus multi-sensor simulation (RGB-D cameras, Lidar, Radar, IMU, contact sensors, and more).
    Downloads: 3 This Week
    See Project
  • 8
    NVIDIA device plugin for Kubernetes

    The NVIDIA device plugin for Kubernetes is a DaemonSet that allows you to automatically expose the number of GPUs on each node of your cluster, keep track of the health of your GPUs, and run GPU-enabled containers in your Kubernetes cluster.
    Downloads: 3 This Week
    See Project
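    Once the device plugin advertises GPUs, workloads request them through the nvidia.com/gpu extended resource. The sketch below builds such a pod manifest in Python; the container image tag is a placeholder.

        # Hedged sketch: a minimal pod manifest requesting one GPU via the
        # device plugin's extended resource name (nvidia.com/gpu).
        import json

        pod = {
            "apiVersion": "v1",
            "kind": "Pod",
            "metadata": {"name": "gpu-smoke-test"},
            "spec": {
                "restartPolicy": "Never",
                "containers": [
                    {
                        "name": "cuda-container",
                        "image": "nvidia/cuda:12.2.0-base-ubuntu22.04",  # placeholder tag
                        "command": ["nvidia-smi"],
                        "resources": {"limits": {"nvidia.com/gpu": 1}},
                    }
                ],
            },
        }

        # Kubernetes accepts JSON manifests, e.g. `kubectl apply -f pod.json`.
        print(json.dumps(pod, indent=2))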
  • 9
    NVIDIA Isaac Lab

    Unified framework for robot learning built on NVIDIA Isaac Sim

    Isaac Lab is an open-source, modular robotics learning framework built atop Isaac Sim. It simplifies research workflows across reinforcement learning, imitation learning, and motion planning by offering robust, GPU-accelerated simulation with realistic sensor and physics fidelity, ideal for sim-to-real robot training. It is compatible with and optimized for recent Isaac Sim versions (e.g., 5.0 and 4.5) and provides GPU-accelerated, high-fidelity physics and sensor simulation suitable for complex learning tasks.
    Downloads: 2 This Week
    See Project
  • 10
    NVIDIA Isaac GR00T

    NVIDIA Isaac GR00T N1.5 is the world's first open foundation model

    NVIDIA Isaac GR00T N1.5 is an open-source foundation model engineered for generalized humanoid robot reasoning and manipulation skills. It accepts multimodal inputs, such as language and images, and uses a diffusion transformer architecture built upon vision-language encoders, enabling adaptive robot behaviors across diverse environments. It is designed to be customizable via post-training with real or synthetic data. The vision-language model remains frozen during both pretraining and finetuning.
    Downloads: 0 This Week
    See Project
  • 11
    NVIDIA GPU Operator

    NVIDIA GPU Operator creates/configures/manages GPUs atop Kubernetes

    Kubernetes provides access to special hardware resources such as NVIDIA GPUs, NICs, InfiniBand adapters, and other devices through the device plugin framework. However, configuring and managing nodes with these hardware resources requires the configuration of multiple software components such as drivers, container runtimes, and other libraries, which is difficult and error-prone. The NVIDIA GPU Operator uses the operator framework within Kubernetes to automate the management of the NVIDIA software components needed to provision GPUs.
    Downloads: 0 This Week
    See Project
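    The GPU Operator is commonly installed from its Helm chart. A hedged sketch of that install path, driven from Python, is shown below; the chart repository URL and flags follow the commonly documented procedure and may need adjusting for your cluster.

        # Hedged sketch: install the GPU Operator via Helm, driven by subprocess.
        import subprocess

        def run(cmd):
            """Run a command and fail loudly if it returns non-zero."""
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        run(["helm", "repo", "add", "nvidia", "https://helm.ngc.nvidia.com/nvidia"])
        run(["helm", "repo", "update"])
        run([
            "helm", "install", "--wait", "--generate-name",
            "-n", "gpu-operator", "--create-namespace",
            "nvidia/gpu-operator",
        ])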
  • 12
    Downloads: 13 This Week
    See Project
  • 13
    NVIDIA Container Toolkit

    Build and run Docker containers leveraging NVIDIA GPUs

    The NVIDIA Container Toolkit allows users to build and run GPU-accelerated Docker containers. The toolkit includes a container runtime library and utilities to automatically configure containers to leverage NVIDIA GPUs. Make sure you have installed the NVIDIA driver and Docker engine for your Linux distribution. Note that you do not need to install the CUDA Toolkit on the host system, but the NVIDIA driver needs to be installed. The NVIDIA Container Toolkit supports different container engines, including Docker, containerd, CRI-O, and Podman.
    Downloads: 8 This Week
    See Project
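    With the toolkit and driver installed, a one-off container run is a quick way to confirm GPU access. The sketch below uses Docker's --gpus flag and a placeholder CUDA base image tag.

        # Hedged sketch: quick end-to-end check that a container can see the GPU.
        # The CUDA image tag is a placeholder; pick one matching your driver.
        import subprocess

        subprocess.run(
            [
                "docker", "run", "--rm",
                "--gpus", "all",                        # GPU access via the NVIDIA runtime integration
                "nvidia/cuda:12.2.0-base-ubuntu22.04",  # placeholder image tag
                "nvidia-smi",
            ],
            check=True,
        )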
  • 14
    Sunshine

    Self-hosted game stream host for Moonlight

    Sunshine is an open-source, self-hosted cloud gaming server that implements NVIDIA's GameStream protocol. Compatible with Moonlight clients across platforms, it supports low-latency streaming via software or hardware encoding (AMD/Intel/NVIDIA) and offers a browser-based control UI for pairing.
    Downloads: 265 This Week
    See Project
  • 15
    AimAhead

    The fastest AI-powered aimbot

    AimAhead is an AI-powered aim assist tool designed for high-speed target acquisition. It captures the screen, processes the image through a selected AI model to detect enemies, and then aims towards them. Optimized for NVIDIA graphics cards, AimAhead converts ONNX models to TensorRT engine files for enhanced performance, achieving between 100 and 200 cycles per second depending on the model used.
    Downloads: 200 This Week
    See Project
  • 16
    XMRig

    RandomX, KawPow, CryptoNight, AstroBWT and GhostRider unified miner

    XMRig is a high-performance, open-source, cross-platform RandomX, KawPow, CryptoNight, and AstroBWT unified CPU/GPU miner, RandomX benchmark, and stratum proxy. Official binaries are available for Windows, Linux, macOS, and FreeBSD. The preferred way to configure the miner is the JSON config file, as it is more flexible and human-friendly.
    Downloads: 76 This Week
    See Project
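    As the entry notes, XMRig is usually configured through a JSON file. The sketch below writes a minimal config from Python; the pool URL and wallet address are placeholders, only a handful of options are shown, and the exact schema should be checked against the project's documentation.

        # Hedged sketch: write a minimal XMRig-style JSON config. Keys shown are
        # a small, assumed subset; pool URL and wallet are placeholders.
        import json

        config = {
            "autosave": True,
            "cpu": {"enabled": True},      # CPU mining backend
            "opencl": {"enabled": False},  # AMD GPU backend
            "cuda": {"enabled": False},    # NVIDIA GPU backend
            "pools": [
                {
                    "url": "pool.example.com:3333",  # placeholder pool
                    "user": "YOUR_WALLET_ADDRESS",   # placeholder wallet
                    "keepalive": True,
                    "tls": False,
                }
            ],
        }

        with open("config.json", "w") as f:
            json.dump(config, f, indent=4)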
  • 17
    ONNX Runtime

    ONNX Runtime: cross-platform, high performance ML inferencing

    ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition to existing PyTorch training scripts. It supports a variety of frameworks, operating systems, and hardware platforms, with built-in optimizations that deliver up to 17X faster inferencing and up to 1.4X faster training.
    Downloads: 36 This Week
    See Project
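    A typical way to use ONNX Runtime with NVIDIA GPUs is to request the CUDA execution provider when creating a session, falling back to CPU otherwise. The sketch below assumes a placeholder model path and a placeholder input shape.

        # Hedged sketch: run a model with ONNX Runtime, preferring the CUDA
        # execution provider when the GPU package is installed.
        import numpy as np
        import onnxruntime as ort

        session = ort.InferenceSession(
            "model.onnx",  # placeholder path
            providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
        )

        # Feed a dummy batch; replace the input shape with your model's.
        input_name = session.get_inputs()[0].name
        dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
        outputs = session.run(None, {input_name: dummy})
        print(outputs[0].shape)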
  • 18
    DeepSeek-V3

    Powerful AI language model (MoE) optimized for efficiency/performance

    DeepSeek-V3 uses supervised fine-tuning and reinforcement learning to fully realize its capabilities. Evaluations indicate that it outperforms other open-source models and rivals leading closed-source models, achieving this with a training duration of roughly 55 days on 2,048 NVIDIA H800 GPUs, costing approximately $5.58 million.
    Downloads: 34 This Week
    See Project
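    The cost figure quoted above can be sanity-checked with back-of-the-envelope arithmetic. The per-GPU-hour rate in the sketch below is an assumption chosen only to show how the pieces relate; it is not stated in this listing.

        # Hedged back-of-the-envelope check of the quoted training cost.
        # The ~$2/GPU-hour rental rate is an assumption, not from the entry.
        gpus = 2048
        days = 55
        gpu_hours = gpus * days * 24          # about 2.70 million H800 GPU-hours
        rate_per_gpu_hour = 2.07              # assumed USD rental rate
        estimated_cost = gpu_hours * rate_per_gpu_hour
        print(f"{gpu_hours:,} GPU-hours -> ${estimated_cost / 1e6:.2f}M")  # roughly $5.6M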
  • 19
    TensorRT

    C++ library for high performance inference on NVIDIA GPUs

    NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications. TensorRT-based applications perform up to 40X faster than CPU-only platforms during inference. With TensorRT, you can optimize neural network models trained in all major frameworks, calibrate for lower precision with high accuracy, and deploy to hyperscale data centers.
    Downloads: 17 This Week
    See Project
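    Building an engine from an ONNX model is a common TensorRT workflow. The sketch below uses the Python bindings with a placeholder model path; API details (such as network-creation flags) vary between TensorRT releases, so treat it as an outline rather than a definitive recipe.

        # Hedged sketch: build a TensorRT engine from an ONNX file.
        import tensorrt as trt

        logger = trt.Logger(trt.Logger.WARNING)
        builder = trt.Builder(logger)
        # Explicit-batch flag is required on TensorRT 8.x; newer releases may not need it.
        network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
        parser = trt.OnnxParser(network, logger)

        with open("model.onnx", "rb") as f:          # placeholder path
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                raise RuntimeError("failed to parse ONNX model")

        config = builder.create_builder_config()
        config.set_flag(trt.BuilderFlag.FP16)        # reduced precision, if the GPU supports it

        engine_bytes = builder.build_serialized_network(network, config)
        with open("model.engine", "wb") as f:
            f.write(engine_bytes)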
  • 20
    CV-CUDA

    CV-CUDA™ is an open-source, GPU accelerated library

    CV-CUDA is an open-source project that enables building efficient cloud-scale Artificial Intelligence (AI) imaging and computer vision (CV) applications. It uses graphics processing unit (GPU) acceleration to help developers build highly efficient pre- and post-processing pipelines. CV-CUDA originated as a collaborative effort between NVIDIA and ByteDance.
    Downloads: 20 This Week
    See Project
  • 21
    Zenith

    Sort of like top or htop but with zoom-able charts, CPU, GPU

    In-terminal graphical metrics for your *nix system, written in Rust. The Makefile provides for building fully static versions on Linux against the musl C library. This requires musl-gcc to be installed on the system: install the "musl-tools" package on Debian/Ubuntu derivatives, "musl-gcc" on Fedora, and the equivalent on other distributions from their standard repos. Building with NVIDIA support in a virtual environment requires some additional setup.
    Downloads: 9 This Week
    See Project
  • 22
    InvokeAI

    InvokeAI is a leading creative engine for Stable Diffusion models

    InvokeAI offers an industry-leading web interface and an interactive command-line interface, and also serves as the foundation for multiple commercial products. This fork is supported across Linux, Windows, and macOS. Linux users can use either an NVIDIA-based card (with CUDA support) or an AMD card (using the ROCm driver). We do not recommend the GTX 1650 or 1660 series video cards: they are unable to run in half-precision mode and do not have sufficient VRAM to render 512x512 images.
    Downloads: 27 This Week
    See Project
  • 23
    CuPy

    A NumPy-compatible array library accelerated by CUDA

    CuPy is an open-source implementation of a NumPy-compatible multi-dimensional array accelerated with NVIDIA CUDA. It consists of cupy.ndarray, a core multi-dimensional array class, and many functions on it. CuPy offers GPU-accelerated computing with Python, using CUDA-related libraries to fully utilize the GPU architecture. According to benchmarks, it can even speed up some operations by more than 100X. CuPy is highly compatible with NumPy, serving as a drop-in replacement in most cases.
    Downloads: 19 This Week
    See Project
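    The drop-in nature described above means the NumPy and CuPy APIs largely mirror each other. The sketch below assumes a CUDA-capable GPU and a cupy wheel matching the local CUDA version.

        # Hedged sketch of CuPy as a NumPy drop-in: the same call runs on the GPU
        # simply by swapping the module.
        import numpy as np
        import cupy as cp

        x_cpu = np.random.rand(1000, 1000).astype(np.float32)
        x_gpu = cp.asarray(x_cpu)            # host -> device copy

        # Identical API call, executed on the GPU.
        norm_gpu = cp.linalg.norm(x_gpu)

        # Bring the scalar result back to the host and compare with NumPy.
        print(float(norm_gpu), np.linalg.norm(x_cpu))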
  • 24
    Whishper

    Transcribe any audio to text, translate and edit subtitles 100% locally

    Open-source, local-first audio transcription and subtitling suite with a simple web UI. Thanks to open-source technologies, Whishper can run 100% offline, so your data never leaves your computer. Whishper allows you to translate your transcriptions to and from more than 60 languages thanks to Argos Translate and LibreTranslate. Download the transcriptions in many formats (JSON, TXT, VTT, SRT) and easily edit your subtitles right in the web UI.
    Downloads: 18 This Week
    See Project
  • 25
    Jellyfin Android TV

    Android TV Client for Jellyfin

    Jellyfin Android TV is a Jellyfin client for Android TV, Nvidia Shield, and Amazon Fire TV devices. We welcome all contributions and pull requests! If you have a larger feature in mind, please open an issue so we can discuss the implementation before you start. Jellyfin is the volunteer-built media solution that puts you in control of your media. Stream to any device from your own server, with no strings attached. Your media, your server, your way. Jellyfin enables you to collect, manage, and stream your media.
    Downloads: 22 This Week
    See Project