  • 1
    ChatGPT UI

    A ChatGPT web client that supports multiple users and databases

    A ChatGPT web client that supports multiple users, multiple database connections for persistent data storage, and i18n. Provides Docker images and quick deployment scripts. Supports the gpt-4 model; you can select the model under "Model Parameters" in the front end (the GPT-4 model requires whitelist access from OpenAI). Web search capability has been added to generate more relevant and up-to-date answers from ChatGPT. This feature is off by default; to enable it, open `Chat->Settings` in the admin panel and set the `open_web_search` record to True. An `open_registration` setting in the admin panel controls whether user registration is enabled; you can find it under `Chat->Settings`. Its default value is True (registration allowed); change it to False if you do not need it.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 2
    DALL-E 2 - Pytorch

    Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis

    Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch. The main novelty seems to be an extra layer of indirection with the prior network (whether it is an autoregressive transformer or a diffusion network), which predicts an image embedding based on the text embedding from CLIP. Specifically, this repository only builds out the diffusion prior network, as it is the best-performing variant (which incidentally involves a causal transformer as the denoising network). Training DALL-E 2 is a three-step process, with the training of CLIP being the most important. To train CLIP, you can either use the x-clip package or join the LAION discord, where a lot of replication efforts are already underway. Then you will need to train the decoder, which learns to generate images based on the image embedding coming from the trained CLIP. A condensed training sketch follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
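    A condensed sketch of the three-step flow described above, loosely following the dalle2-pytorch README; the class names come from that package, but constructor arguments are illustrative and may differ between releases.

    ```python
    import torch
    from dalle2_pytorch import CLIP, DiffusionPriorNetwork, DiffusionPrior, Unet, Decoder, DALLE2

    # Step 1: train CLIP (or substitute a pretrained CLIP, e.g. via x-clip)
    clip = CLIP(
        dim_text=512, dim_image=512, dim_latent=512,
        num_text_tokens=49408, text_enc_depth=6, text_seq_len=256, text_heads=8,
        visual_enc_depth=6, visual_image_size=256, visual_patch_size=32, visual_heads=8,
    )
    text = torch.randint(0, 49408, (4, 256))
    images = torch.randn(4, 3, 256, 256)
    clip(text, images, return_loss=True).backward()

    # Step 2: train the diffusion prior, which maps text embeddings to image embeddings
    prior_network = DiffusionPriorNetwork(dim=512, depth=6, dim_head=64, heads=8)
    diffusion_prior = DiffusionPrior(net=prior_network, clip=clip, timesteps=100, cond_drop_prob=0.2)
    diffusion_prior(text, images).backward()

    # Step 3: train the decoder, which generates images from image embeddings
    unet = Unet(dim=128, image_embed_dim=512, cond_dim=128, channels=3, dim_mults=(1, 2, 4, 8))
    decoder = Decoder(unet=unet, clip=clip, timesteps=100, image_cond_drop_prob=0.1, text_cond_drop_prob=0.5)
    decoder(images).backward()

    # Inference: chain the trained prior and decoder behind a single text-to-image call
    dalle2 = DALLE2(prior=diffusion_prior, decoder=decoder)
    generated_images = dalle2(['a butterfly resting on a flower'], cond_scale=2.0)
    ```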
  • 3
    Dickinson

    Text generation language

    Dickinson is a text-generation language. You can try out the language on the web without installing anything. Binaries for some platforms are available on the releases page. There is an install script that will try to download the right release for your computer.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 4
    Edward

    A probabilistic programming language in TensorFlow

    Edward is a Python library for probabilistic modeling, inference, and criticism. It is a testbed for fast experimentation and research with probabilistic models, ranging from classical hierarchical models on small data sets to complex deep probabilistic models on large data sets. Edward fuses three fields: Bayesian statistics and machine learning, deep learning, and probabilistic programming. Edward is built on TensorFlow, which enables features such as computational graphs, distributed training, CPU/GPU integration, automatic differentiation, and visualization with TensorBoard. Its inference algorithms include variational methods and Monte Carlo, as well as expectation-maximization, pseudo-marginal and ABC methods, and message passing algorithms. A minimal modeling sketch follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
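    A minimal Bayesian linear regression sketch in the spirit of Edward's tutorials; Edward targets TensorFlow 1.x APIs (placeholders and sessions), and the names below follow its documented model/inference pattern.

    ```python
    import numpy as np
    import tensorflow as tf
    import edward as ed
    from edward.models import Normal

    # Toy data: y = 2x + noise
    X_train = np.random.randn(50, 1).astype(np.float32)
    y_train = (2.0 * X_train[:, 0] + 0.1 * np.random.randn(50)).astype(np.float32)

    # Model: priors over weight and bias, Gaussian likelihood
    X = tf.placeholder(tf.float32, [50, 1])
    w = Normal(loc=tf.zeros(1), scale=tf.ones(1))
    b = Normal(loc=tf.zeros(1), scale=tf.ones(1))
    y = Normal(loc=ed.dot(X, w) + b, scale=tf.ones(50))

    # Variational approximation for each latent variable
    qw = Normal(loc=tf.Variable(tf.zeros(1)), scale=tf.nn.softplus(tf.Variable(tf.zeros(1))))
    qb = Normal(loc=tf.Variable(tf.zeros(1)), scale=tf.nn.softplus(tf.Variable(tf.zeros(1))))

    # Run variational inference
    inference = ed.KLqp({w: qw, b: qb}, data={X: X_train, y: y_train})
    inference.run(n_iter=500)
    ```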
  • 5
    GPT AI Assistant

    OpenAI + LINE + Vercel = GPT AI Assistant

    GPT AI Assistant is an application that is implemented using the OpenAI API and LINE Messaging API. Through the installation process, you can start chatting with your own AI assistant using the LINE mobile app.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 6
    Generative AI JS

    This SDK is now deprecated, use the new unified Google GenAI SDK

    deprecated-generative-ai-js is a JavaScript/TypeScript client and example suite for interacting with Gemini generative APIs in web and Node.js environments. Though marked deprecated (likely superseded by newer SDKs), the repo shows how to wrap HTTP/WS endpoints, manage streaming responses, and interoperate with browser UI or server logic. The examples include chat widgets, prompt pipelines, and generalized inference utilities. It also deals with streaming cancellation, retries, backoff logic, and message chunk assembly to help developers handle real-world use. Because it’s JavaScript, the repo supports both ESM and CommonJS contexts, making it versatile in backend and frontend setups. The deprecation label reflects that newer or official SDKs may have replaced it, but many of its patterns still serve as a useful reference to understand how streaming, chunking, and prompt logic can be implemented by hand in JS.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 7
    Generative AI Swift

    This SDK is now deprecated, use the unified Firebase SDK.

    deprecated-generative-ai-swift is a Swift client and example scaffold for building generative AI apps using the Gemini models. Although marked “deprecated”, the repo demonstrates how to integrate Gemini inference into iOS and macOS apps via Swift APIs, providing boilerplate for prompt dispatching, streaming responses, UI integration, and error handling. It includes a sample app that showcases a chat interface, where users send messages and receive responses streamed in real time, with UI updates as tokens arrive. The code also handles request queuing, cancellation, and retry logic, giving developers a realistic foundation rather than a minimalist “hello world.” Despite its deprecated label, the repo remains valuable for developers who want to see how a native Swift integration might be structured before migrating to newer SDKs. Maintainability is emphasized: modular layers separate networking, prompt handling, and UI logic, making adaptation easier when switching to updated APIs.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 8
    Make-A-Video - Pytorch (wip)

    Implementation of Make-A-Video, new SOTA text to video generator

    Implementation of Make-A-Video, the new SOTA text-to-video generator from Meta AI, in Pytorch. They combine pseudo-3D convolutions (axial convolutions) and temporal attention and show much better temporal fusion. Pseudo-3D convolutions aren't a new concept; they have been explored before in other contexts, for example for protein contact prediction as "dimensional hybrid residual networks". The gist of the paper comes down to: take a SOTA text-to-image model (here they use DALL-E 2, but the same learning points would easily apply to Imagen), make a few minor modifications for attention across time and other ways to skimp on the compute cost, do frame interpolation correctly, and get a great video model out. When passing in images (if one were to pretrain on images first), both temporal convolution and attention are automatically skipped. In other words, you can use this straightforwardly in your 2D U-net and then port it over to a 3D U-net once that phase of the training is done. A minimal sketch of the image/video handling follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
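    A small sketch of the behaviour described above, using the PseudoConv3d and SpatioTemporalAttention modules named in the make-a-video-pytorch README; the point is that the same modules accept 4-D image tensors (temporal paths skipped) and 5-D video tensors.

    ```python
    import torch
    from make_a_video_pytorch import PseudoConv3d, SpatioTemporalAttention

    conv = PseudoConv3d(dim=256, kernel_size=3)
    attn = SpatioTemporalAttention(dim=256, dim_head=64, heads=8)

    # Images: (batch, features, height, width) - temporal convolution/attention are skipped
    images = torch.randn(1, 256, 16, 16)
    conv_out = conv(images)
    attn_out = attn(images)

    # Videos: (batch, features, frames, height, width) - spatial and temporal paths both run
    video = torch.randn(1, 256, 8, 16, 16)
    conv_out = conv(video)
    attn_out = attn(video)
    ```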
  • 9
    Megatron

    Ongoing research training transformer models at scale

    Megatron is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This repository is for ongoing research on training large transformer language models at scale. It provides efficient, model-parallel (tensor, sequence, and pipeline) and multi-node pre-training of transformer-based models such as GPT, BERT, and T5 using mixed precision. Megatron is also used in NeMo Megatron, a framework to help enterprises overcome the challenges of building and training sophisticated natural language processing models with billions or trillions of parameters. A conceptual illustration of tensor parallelism follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
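    A purely conceptual illustration of tensor (column) model parallelism in plain PyTorch (not Megatron-LM's API), showing how a linear layer's weight matrix can be sharded so each device computes a slice of the output.

    ```python
    import torch

    # Pretend world: two "devices", each holding half the rows of a linear layer's weight.
    hidden, out_features, world_size = 16, 32, 2
    x = torch.randn(4, hidden)
    full_weight = torch.randn(out_features, hidden)

    shards = full_weight.chunk(world_size, dim=0)     # one shard per device
    partial_outputs = [x @ w.t() for w in shards]     # each device computes its output slice
    y_parallel = torch.cat(partial_outputs, dim=-1)   # "all-gather" the slices

    # Matches the unsharded computation; Megatron-LM does this across real GPUs with NCCL,
    # combined with pipeline and sequence parallelism for multi-node training.
    assert torch.allclose(y_parallel, x @ full_weight.t(), atol=1e-5)
    ```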
  • 10
    Node ChatGPT API

    A client implementation for ChatGPT and Bing AI

    A client implementation for ChatGPT and Bing AI, available as a Node.js module, REST API server, and CLI app. Support for the official ChatGPT model has been added: you can now use the gpt-3.5-turbo model with the official OpenAI API via ChatGPTClient. This is the same model that ChatGPT uses, and it's the most powerful model available right now. Usage of this model is not free; however, it is 10x cheaper than text-davinci-003. The default model used in ChatGPTClient is now gpt-3.5-turbo, and you can still set userLabel, chatGptLabel and promptPrefix (system instructions) as usual. Replicates chat threads from the official ChatGPT website (with conversation IDs and message IDs), with persistent conversations using Keyv. Conversations are stored in memory by default, but you can optionally install a storage adapter to persist conversations to a database.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 11
    OpenAI DALL·E AsyncImage SwiftUI

    OpenAI swift async text to image for SwiftUI app using OpenAI

    SwiftUI views that asynchronously load and display an OpenAI-generated image from the OpenAI API. You just type in your idea and the AI will give you an art solution. DALL-E and DALL-E 2 are deep learning models developed by OpenAI to generate digital images from natural language descriptions, called "prompts". You need to have Xcode 13 installed in order to have access to the Documentation Compiler (DocC). OpenAI's text-to-image model DALL-E 2 is a recent example of diffusion models: it uses diffusion models for both the model's prior (which produces an image embedding given a text caption) and the decoder that generates the final image. In machine learning, diffusion models, also known as diffusion probabilistic models, are a class of latent variable models. They are Markov chains trained using variational inference. The goal of diffusion models is to learn the latent structure of a dataset by modeling the way in which data points diffuse through the latent space. A small conceptual sketch of the forward diffusion process follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
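    A conceptual illustration (in Python, not part of this Swift package) of the forward diffusion process mentioned above: given a variance schedule, a clean sample can be noised to any step in closed form, and the decoder is trained to undo that noise.

    ```python
    import torch

    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)            # linear variance schedule
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative products of (1 - beta_t)

    x0 = torch.randn(1, 3, 64, 64)                   # stand-in for a clean image
    t = 500
    eps = torch.randn_like(x0)

    # Closed form for q(x_t | x_0): x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps
    x_t = alphas_bar[t].sqrt() * x0 + (1.0 - alphas_bar[t]).sqrt() * eps

    # A denoiser trained to predict eps from (x_t, t) lets sampling run the chain in
    # reverse, from pure noise back to an image.
    print(x_t.shape)
    ```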
  • 12
    PaddleNLP

    Easy-to-use and powerful NLP library with Awesome model zoo

    PaddleNLP is a natural language processing development library for PaddlePaddle, with three major features: an easy-to-use text-domain API, example applications for multiple scenarios, and high-performance distributed training. It aims to improve developers' modeling and development efficiency in the text domain and provides rich examples of NLP applications. It offers industry-grade preset task capabilities through Taskflow along with a full-pipeline text-domain API: a Dataset API for loading rich Chinese datasets, a Data API for flexible and efficient preprocessing, an Embedding API with 60+ pretrained word vectors, and a Transformer API with 100+ pretrained models, all of which greatly improve the efficiency of NLP task modeling. A brief Taskflow sketch follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
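    A brief Taskflow sketch, assuming the task names documented for recent PaddleNLP releases (requires the paddlepaddle framework to be installed).

    ```python
    from paddlenlp import Taskflow

    # Taskflow exposes preset, industry-grade pipelines behind a single entry point.
    seg = Taskflow("word_segmentation")
    print(seg("PaddleNLP是飞桨的自然语言处理开发库"))

    senta = Taskflow("sentiment_analysis")
    print(senta("这个产品用起来真的很流畅"))
    ```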
  • 13
    Petals

    Run 100B+ language models at home, BitTorrent-style

    Run 100B+ language models at home, BitTorrent‑style. Run large language models like BLOOM-176B collaboratively — you load a small part of the model, then team up with people serving the other parts to run inference or fine-tuning. Single-batch inference runs at ≈ 1 sec per step (token) — up to 10x faster than offloading, enough for chatbots and other interactive apps. Parallel inference reaches hundreds of tokens/sec. Beyond classic language model APIs — you can employ any fine-tuning and sampling methods, execute custom paths through the model, or see its hidden states. You get the comforts of an API with the flexibility of PyTorch. You can also host BLOOMZ, a version of BLOOM fine-tuned to follow human instructions in the zero-shot regime — just replace bloom-petals with bloomz-petals. A minimal inference sketch follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
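    A minimal inference sketch following the README pattern from the BLOOM era this description refers to; later Petals releases renamed the entry points (for example to AutoDistributedModelForCausalLM), so treat the class and checkpoint names as era-specific.

    ```python
    from transformers import BloomTokenizerFast
    from petals import DistributedBloomForCausalLM  # era-specific class name, see note above

    tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom-petals")
    model = DistributedBloomForCausalLM.from_pretrained("bigscience/bloom-petals")

    # Only a small shard of the model is loaded locally; the rest is served by peers.
    inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
    outputs = model.generate(inputs, max_new_tokens=5)
    print(tokenizer.decode(outputs[0]))
    ```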
  • 14
    Regex

    Generate matching and non matching strings based on regex patterns

    Generate matching and non-matching strings. This is a Java library that, given a regex pattern, allows generation of matching strings, iteration through unique matching strings, and generation of non-matching strings. Follow the link to an online IDE with the created project (JDoodle), enter your pattern, and see the results. By design, a+, a* and a{n,} patterns in regex imply that an infinite number of characters could be matched; when generating data, that would mean values of infinite length might be generated. It is highly doubtful anyone would require a string of infinite length, thus I've artificially limited repetitions in such patterns to 100 symbols when generating random values. Use a{n,m} if you require some specific number of repetitions; it is suggested to avoid such open-ended patterns when generating data based on a regex. A conceptual sketch of the repetition cap follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
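    The library itself is written in Java; the snippet below is only a conceptual Python illustration of the capping rule described above, where open-ended quantifiers such as a{2,} are sampled with an artificial upper bound.

    ```python
    import random
    import re

    def sample_repetition(char: str, min_rep: int, max_rep=None, cap: int = 100) -> str:
        """Sample a run of `char` honouring a{min,max}; open-ended patterns get capped."""
        upper = max_rep if max_rep is not None else cap
        return char * random.randint(min_rep, upper)

    s = sample_repetition("a", 2)       # stands in for the open-ended pattern a{2,}
    assert re.fullmatch(r"a{2,}", s)    # still a valid match, just bounded in length
    print(len(s))
    ```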
  • 15
    SDGym

    Benchmarking synthetic data generation methods

    The Synthetic Data Gym (SDGym) is a benchmarking framework for modeling and generating synthetic data. Measure performance and memory usage across different synthetic data modeling techniques – classical statistics, deep learning and more! The SDGym library integrates with the Synthetic Data Vault ecosystem: you can use any of its synthesizers, datasets or metrics for benchmarking, and you can also customize the process to include your own work. Select any of the publicly available datasets from the SDV project, or input your own data. Choose from any of the SDV synthesizers and baselines, or write your own custom machine learning model. In addition to performance and memory usage, you can also measure synthetic data quality and privacy through a variety of metrics. Install SDGym using pip or conda; we recommend using a virtual environment to avoid conflicts with other software on your device. A minimal benchmarking sketch follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
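    A minimal benchmarking sketch, assuming the benchmark_single_table entry point of recent SDGym releases (older versions exposed a different top-level API).

    ```python
    # pip install sdgym
    from sdgym import benchmark_single_table

    # Benchmark a couple of SDV synthesizers on the bundled demo datasets and
    # collect quality, performance and memory metrics in one results table.
    results = benchmark_single_table(
        synthesizers=["GaussianCopulaSynthesizer", "CTGANSynthesizer"],
    )
    print(results.head())
    ```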
  • 16
    Satori

    Enlightened library to convert HTML and CSS to SVG

    Enlightened library to convert HTML and CSS to SVG. Satori supports the JSX syntax, which makes it very straightforward to use. Satori will render the element into a 600×400 SVG and return the SVG string. Under the hood, it handles layout calculation, fonts, typography and more to generate an SVG that matches the exact same HTML and CSS in a browser. Satori only accepts JSX elements that are pure and stateless. You can use a subset of HTML elements (see section below), or custom React components, but React APIs such as useState, useEffect, and dangerouslySetInnerHTML are not supported. Satori supports a limited subset of HTML and CSS features, due to its special use cases; in general, only static and visible elements and properties are implemented. Also, Satori does not guarantee that the SVG will 100% match the browser-rendered HTML output, since Satori implements its own layout engine based on the SVG 1.1 spec.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 17
    Simple StyleGan2 for Pytorch

    Simplest working implementation of Stylegan2

    Simple Pytorch implementation of Stylegan2 that can be completely trained from the command-line, no coding needed. You will need a machine with a GPU and CUDA installed. You can also specify the location where intermediate results and model checkpoints should be stored. You can increase the network capacity (which defaults to 16) to improve generation results, at the cost of more memory. By default, if the training gets cut off, it will automatically resume from the last checkpointed file. Once you have finished training, you can generate images from your latest checkpoint. If a previous checkpoint contained a better generator (which often happens, as generators start degrading towards the end of training), you can load from a previous checkpoint with another flag. A technique used in both StyleGAN and BigGAN is truncating the latent values so that their values fall close to the mean. The smaller the truncation value, the better the samples will appear, at the cost of sample variety.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 18
    Stable Diffusion v 2.1 web UI

    Lightweight Stable Diffusion v 2.1 web UI: txt2img, img2img, depth2img

    Lightweight Stable Diffusion v2.1 web UI: txt2img, img2img, depth2img, inpaint and upscale4x. Gradio app for Stable Diffusion 2 by Stability AI. It uses the Hugging Face Diffusers implementation. Currently supported pipelines are text-to-image, image-to-image, inpainting, upscaling and depth-to-image.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 19
    Synthetic Data Vault (SDV)

    Synthetic Data Generation for tabular, relational and time series data

    The Synthetic Data Vault (SDV) is a synthetic data generation ecosystem of libraries that allows users to easily learn single-table, multi-table and time series datasets and later generate new synthetic data that has the same format and statistical properties as the original dataset. Synthetic data can then be used to supplement, augment and in some cases replace real data when training machine learning models. Additionally, it enables the testing of machine learning or other data-dependent software systems without the risk of exposure that comes with data disclosure. Under the hood it uses several probabilistic graphical modeling and deep learning based techniques. To enable a variety of data storage structures, we employ unique hierarchical generative modeling and recursive sampling techniques. A minimal single-table sketch follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
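    A minimal single-table sketch, assuming the SDV 1.x API (GaussianCopulaSynthesizer and the bundled demo dataset); earlier releases used a different module layout.

    ```python
    from sdv.datasets.demo import download_demo
    from sdv.single_table import GaussianCopulaSynthesizer

    # Load a small demo table plus its metadata description
    real_data, metadata = download_demo(modality="single_table", dataset_name="fake_hotel_guests")

    # Learn the table and sample new rows with the same format and statistics
    synthesizer = GaussianCopulaSynthesizer(metadata)
    synthesizer.fit(real_data)
    synthetic_data = synthesizer.sample(num_rows=500)
    print(synthetic_data.head())
    ```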
  • 20
    Video Diffusion - Pytorch

    Implementation of Video Diffusion Models

    Implementation of Video Diffusion Models, Jonathan Ho's new paper extending DDPMs to video generation, in Pytorch. It uses a special space-time factored U-net, extending generation from 2D images to 3D videos. 14k for difficult moving mnist (converging much faster and better than NUWA) - wip. Any new developments for text-to-video synthesis will be centralized at Imagen-pytorch. For conditioning on text, they derived text embeddings by first passing the tokenized text through BERT-large. You can also directly pass in the descriptions of the video as strings, if you plan on using BERT-base for text conditioning. This repository also contains a handy Trainer class for training on a folder of gifs. Each gif must be of the correct dimensions image_size and num_frames. A minimal training sketch follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
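    A minimal training sketch following the video-diffusion-pytorch README pattern; argument values are illustrative.

    ```python
    import torch
    from video_diffusion_pytorch import Unet3D, GaussianDiffusion

    model = Unet3D(dim=64, dim_mults=(1, 2, 4, 8))

    diffusion = GaussianDiffusion(
        model,
        image_size=32,    # spatial size of each frame
        num_frames=5,     # frames per video
        timesteps=1000,   # diffusion steps
    )

    videos = torch.randn(1, 3, 5, 32, 32)  # (batch, channels, frames, height, width)
    loss = diffusion(videos)
    loss.backward()

    sampled_videos = diffusion.sample(batch_size=2)
    ```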
  • 21
    flat

    All-in-one image generation AI

    All-in-one image generation AI. Launch StableDiffusionWebUI with just a few clicks. No Python installation or repository cloning is required. Displays generated images in a list with information such as prompts. The image folder can be set freely.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 22
    gpt2-client

    Easy-to-use TensorFlow Wrapper for GPT-2 117M, 345M, 774M, etc.

    GPT-2 is a Natural Language Processing model developed by OpenAI for text generation. It is the successor to the GPT (Generative Pre-trained Transformer) model trained on 40GB of text from the internet. It features a Transformer architecture that was brought to light by the Attention Is All You Need paper in 2017. The model has 4 versions - 124M, 345M, 774M, and 1558M - that differ in terms of the amount of training data used and the number of parameters they contain. Finally, gpt2-client is a wrapper around the original gpt-2 repository that features the same functionality but with more accessibility, comprehensibility, and utility. You can play around with all four GPT-2 models in less than five lines of code; a short sketch follows this entry. Install the client via pip. The generation options are highly flexible: you can mix and match based on what kind of text you need generated, be it multiple chunks or one at a time with prompts.
    Downloads: 2 This Week
    Last Update:
    See Project
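    The "less than five lines" sketch referenced above, following the gpt2-client README; model names and keyword arguments are as documented there and may vary by version.

    ```python
    from gpt2_client import GPT2Client

    gpt2 = GPT2Client('345M')              # one of '117M', '345M', '774M', '1558M'
    gpt2.load_model(force_download=False)  # downloads checkpoints on first use
    gpt2.generate(interactive=True)        # prompts for input, then prints a sample
    gpt2.generate(n_samples=4)             # generates several samples at once
    texts = gpt2.generate(return_text=True)  # returns generated text instead of printing
    ```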
  • 23
    pwa-asset-generator

    Automates PWA asset generation and image declaration

    Automates PWA asset generation and image declaration. Automatically generates icon and splash screen images, favicons and mstile images, and updates manifest.json and index.html with the generated images according to the Web App Manifest specs and Apple Human Interface guidelines. When you build a PWA with the goal of providing native-like experiences on multiple platforms and stores, you need to meet the criteria of those platforms and stores with your PWA assets: icon sizes and splash screens. Google's Android platform respects the Web App Manifest API specs, and it expects you to provide at least 2 icon sizes in your manifest file. Apple's iOS currently doesn't support the Web App Manifest API specs, so you need to introduce custom HTML tags to set icons and splash screens for your PWA; for example, a special HTML link tag with rel apple-touch-icon provides icons for your PWA when it's added to the home screen.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 24
    terminalGPT

    Get GPT like ChatGPT on your terminal

    Get a GPT chat experience like ChatGPT on your terminal. Note: this doesn't use OpenAI ChatGPT; it uses the text-davinci-003 model by default. You'll need your own OpenAI API key to operate this package: go to https://beta.openai.com, open your profile menu and go to View API Keys, select "+ Create new secret key", and copy the generated key. To get started, install with `npm -g install terminalgpt` or `yarn global add terminalgpt`, then run `tgpt chat`. If it is your first time running it, it will ask for your OpenAI key; paste the key generated in the prerequisite steps.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 25
    texturize

    Generate photo-realistic textures based on source images

    Generate photo-realistic textures based on source images. Remix, remake, mashup! Useful if you want to create variations on a theme or elaborate on an existing texture. A command-line tool and Python library to automatically generate new textures similar to a source image or photograph. It's useful in the context of computer graphics if you want to make variations on a theme or expand the size of an existing texture. This software is powered by deep learning technology, using a combination of convolutional networks and example-based optimization to synthesize images. We're building texturize as the highest-quality open source library available! The examples are available as notebooks, and you can run them directly in-browser thanks to Jupyter and Google Colab.
    Downloads: 2 This Week
    Last Update:
    See Project