In the latest episode of Front Lines' Category Visionary podcast with Brett Stapper, our CEO and co-founder Jae Lee shares how TwelveLabs built the video AI infrastructure that others said would take too long to be worth it. While the AI world chased quick demos and wrapped models, our team spent years building proprietary, purpose-built video foundation models and indexing infrastructure from scratch.

The result?
⚙️ Infrastructure capable of indexing millions of hours of video in days
🤝 Deep adoption across media, entertainment, sports, and the federal sector

Jae dives into what it takes to stay focused when shortcuts are tempting:
🔹 Hiring for excellence under pressure
🔹 Building for production, not demos
🔹 Structuring GTM for credibility in complex industries

It's a story of conviction, discipline, and building for the world that comes after the hype.

🎧 Listen to the full episode: https://lnkd.in/egWJNMNW

#TwelveLabs #VideoAI
TwelveLabs
Software Development
San Francisco, California 15,138 followers
Building the world's most powerful video understanding platform.
About us
The world's most powerful video intelligence platform for enterprises.
- Website: http://www.twelvelabs.io
- Industry: Software Development
- Company size: 11-50 employees
- Headquarters: San Francisco, California
- Type: Privately Held
- Founded: 2021

Locations
Primary: 55 Green St, San Francisco, California 94111, US
Updates
Every major leap in video technology—from silent films to sound, black and white to color, film to digital—didn't just change how we made stories. It changed what stories we could tell. Today, we are standing at another threshold, and this time the revolution is in understanding. ♾️

At TwelveLabs, we have been thinking deeply about how video understanding AI fundamentally transforms creative possibilities in media and entertainment. Our latest article explores how this technology enables entirely new narrative formats that were previously impossible—not because creators lacked imagination, but because the mechanical burden of video comprehension created insurmountable barriers. 🗝️

It dives into five major opportunities: multi-perspective narratives that adapt to viewer choice, micro-pattern discovery that reveals why content truly resonates, ambient production that finds authentic stories in continuous capture, semantic editorial that understands visual storytelling beyond dialogue, and infinite versioning that maintains narrative integrity across markets and preferences. 🏮

What excites us most is how this shifts the creative equation. When AI handles the overwhelming mechanical work of video comprehension, human creativity becomes more valuable, not less. The differentiator isn't finding footage—it's knowing what's worth looking for. Template-based AI tools force content into predetermined boxes. Video understanding reveals what's already there, waiting to be discovered. 🔦

At TwelveLabs, we are not building tools that tell stories. We are building infrastructure that empowers storytellers to discover stories that were always there, waiting for the right technology to make them visible. Our Pegasus model and Marengo embeddings are designed specifically to capture the fingerprints of content—not just objects and actions, but the interplay between visual elements, audio cues, and temporal dynamics that create meaning. 🎠

The framework is changing. The question for media and entertainment professionals is: what stories will you discover that couldn't be told before?

Read the full article from Ryan Khurana to explore these ideas in depth. Link in comment ⬇️
In the 97th session of #MultimodalWeekly, we feature three projects built with the TwelveLabs API from the recent HackRice hackathon (all from The University of Texas at Dallas students) ⬇️

✅ Md Ahnaf Al Zabir, Vladislav Kondratyev, Nikhil Marisetty, and Adam Absa will present HootHive, an AI-powered lecture summarizer that processes video content using Pegasus: https://lnkd.in/gBuE6h2h
Built with Streamlit, it lets users upload lectures, generate chapter timestamps, create summaries and notes, and run Q&A searches within video content. It also includes practical capabilities such as Discord bot integration and automated study-material generation.

✅ Roman Hauksson, Victor Sim, and Le Duy Pham will present TouchGrass, a burnout-prevention app built with a React (Vite) frontend and a Convex backend: https://lnkd.in/gSKvMjJY
It tracks developer wellbeing through multiple data sources, including webcam mood detection, GitHub commits, Linear project management, and Wakatime coding activity, and uses Pegasus for mood analysis of webcam videos to calculate a burnout risk score.

✅ Sahas Sharma, Sai Chauhan, and Sunay Shehaan will present Doculabubu, an AI-powered telehealth assistant that helps patients remember and understand their doctor visits through intelligent voice queries and video analysis: https://lnkd.in/gybh6yvd
It processes Zoom telehealth recordings with both Marengo and Pegasus, letting patients ask questions about their visits by voice and returning timestamped video clips with answers.

Register for the webinar here: https://lnkd.in/gJGtscSH ⬅️
Join our Discord community to connect with the speakers: https://lnkd.in/gDvse-ii 🤝
What happens when you give 100 brilliant people 24 hours to solve advertising's biggest challenges? This weekend, we found out. ⬇️

Our Generative AI in Advertising Hackathon just wrapped at the stunning betaworks office in Manhattan's Meatpacking District, and honestly, we are still processing what we witnessed. Engineers sitting next to brand marketers. Students collaborating with agency veterans. Founders pairing up with AdTech professionals. All united by a simple question: what if video advertising technology could actually understand what it was looking at? 🤔

Here's what they built: 18 production-ready solutions tackling the problems that cost the industry billions. Not demos. Not proofs of concept. Actual tools that marketing leaders could deploy tomorrow. 🧺

What really struck us was that every single project used video understanding in ways we hadn't anticipated. Teams found applications we never imagined when we built the APIs. That's the thing about putting powerful technology in the hands of people who deeply understand domain problems: they see possibilities you missed. ❇️

Massive gratitude to our partners who made this possible:
✔️ New Enterprise Associates (NEA) brought their extensive advertising network and strategic guidance
✔️ Amazon Web Services (AWS) provided the infrastructure backbone and Bedrock integration that let teams scale without friction
✔️ Swayable shared their expertise in measuring creative impact
✔️ ElevenLabs enabled multimodal solutions that combined video and voice intelligence

And to the participants who spent their weekend with us: 🤝 You could have been anywhere. You chose to spend 24 hours tackling hard problems at the intersection of AI and advertising. You formed teams with strangers, learned new APIs on the fly, and built things that genuinely matter to an industry desperate for innovation. Several of you are already in conversations with potential customers. A few projects might become actual startups. All of you pushed the boundaries of what's possible when video AI meets advertising expertise.

The advertising industry is at an inflection point. Video dominates digital content, but most ad tech still treats it like a mystery box. This weekend proved it doesn't have to stay that way. When you can truly understand video content – its context, emotion, narrative, and meaning – you can build advertising technology that's smarter, safer, and more effective. 📈

To everyone who participated, partnered, mentored, and believed in this vision: thank you for an unforgettable weekend. Let's keep building. ↗️
🚀 Introducing our latest demo application: Video Deep Research, a solution designed to enhance the extraction of insights from video content. At TwelveLabs, we are committed to advancing video understanding. This tutorial presents a seamless workflow with Perplexity Sonar, enabling detailed research capabilities directly from video assets. The objective is to make video a citable and verifiable research resource. 📰

Traditionally, extracting structured insights and validating information from video content has been a manual and resource-intensive endeavor. Our new Video Deep Research application (built by Hrishikesh Yadav), developed with TwelveLabs Analyze (Pegasus-1.2) for dynamic video analysis and Sonar by Perplexity for citation-powered knowledge retrieval, offers a methodological enhancement. 🖇️

Our comprehensive tutorial details the implementation steps, including connecting your TwelveLabs API key to access and manage indexed videos or to upload new content for processing. The core workflow is structured for efficiency and depth, encompassing:
1 - TwelveLabs Client Configuration
2 - Index and Video Retrieval
3 - Intelligent Video Analysis
4 - Deep Research with Perplexity Sonar
5 - Robust Video Upload & Indexing
6 - Real-time Frontend Updates

This workflow addresses an identified need in web search and research, providing developers with tools to build applications that derive verifiable intelligence from video. For businesses, this translates to more efficient insight generation, improved content verification, and expanded applications for video assets in strategic decision-making and content development. 🪙

To explore these video research capabilities, access our full tutorial and review the demo application in the comments. ⬇️
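The core of the workflow above can be sketched as a small pipeline. The Python below is purely illustrative: every function name, data shape, and URL is a hypothetical placeholder standing in for the real TwelveLabs and Perplexity Sonar calls, which are documented in the full tutorial and each provider's official API reference.

```python
# Hypothetical sketch of the Video Deep Research pipeline.
# None of these names are real TwelveLabs or Perplexity SDK APIs;
# they only illustrate how the tutorial's steps chain together.
from dataclasses import dataclass, field


@dataclass
class ResearchResult:
    video_id: str
    analysis: str                                        # step 3: video analysis text
    citations: list[str] = field(default_factory=list)   # step 4: Sonar-style citations


def configure_client(api_key: str) -> dict:
    """Step 1: hold the API key and settings for later calls (placeholder)."""
    if not api_key:
        raise ValueError("A TwelveLabs API key is required")
    return {"api_key": api_key, "index_id": None}


def retrieve_index(client: dict, index_id: str) -> dict:
    """Step 2: select the index whose videos will be analyzed (placeholder)."""
    client["index_id"] = index_id
    return client


def analyze_video(client: dict, video_id: str, prompt: str) -> ResearchResult:
    """Step 3: stand-in for a Pegasus analysis call that returns text."""
    analysis = f"[analysis of {video_id} for prompt: {prompt!r}]"
    return ResearchResult(video_id=video_id, analysis=analysis)


def deep_research(result: ResearchResult) -> ResearchResult:
    """Step 4: stand-in for a Sonar call that attaches source citations."""
    result.citations.append(f"https://example.com/source-for-{result.video_id}")
    return result


def run_pipeline(api_key: str, index_id: str, video_id: str, prompt: str) -> ResearchResult:
    """Chain steps 1-4; upload/indexing and frontend updates (steps 5-6) omitted."""
    client = configure_client(api_key)
    client = retrieve_index(client, index_id)
    return deep_research(analyze_video(client, video_id, prompt))


if __name__ == "__main__":
    result = run_pipeline("demo-key", "idx-123", "vid-456", "Summarize the key claims")
    print(result.analysis, result.citations)
```

The design point is simply that each stage passes a typed result forward, so the citation step can be swapped or extended without touching the analysis step; the real implementation details live in the tutorial linked in the comments.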
This week, our co-founder Soyoung Lee took the stage at the Forbes Under 30 Summit. 🎤

Soyoung gave a live demo of TwelveLabs, showing how our video-native AI makes video as searchable and understandable as text. Instead of sifting through hours of footage manually, our models help studios, creators, and enterprises instantly:
🔎 Find the exact scene they need,
📝 Generate contextual metadata, and
⚡️ Move from raw footage to finished story with speed and precision.

In a conversation with Forbes reporter Zoya Hasan, Soyoung also reflected on our journey so far:
💡 The early days of raising our first round of funding.
🤝 Landing our very first customer.
📚 And how our executive team met as teenagers.

Thank you, Forbes, for having us! #VideoAI #TwelveLabs #Under30Summit
Exciting news from the entertainment world 🎬✨

We’re thrilled to share that Squid Game creator Hwang Dong-hyuk’s Firstman Studio has made an investment in TwelveLabs to accelerate the future of entertainment production.

As Hwang shared: “Storytelling is becoming more global, more visual, and faster-paced. The creators who can adapt will shape the future of entertainment. I believe technology like TwelveLabs will be essential for turning ideas into finished stories at the speed audiences now expect.”

At TwelveLabs, our mission is to index and understand video as easily as text, unlocking billions of dollars of untapped footage and giving filmmakers, studios, and creators the ability to search, discover, and repurpose content with unprecedented speed and precision, all while keeping creative control in human hands.

We’re honored to partner with Firstman Studio and continue building tools that give storytellers more time for the art, emotion, and magic only they can create.

📖 Read the full Variety exclusive here: https://lnkd.in/gBxdE8zv

#TwelveLabs #VideoAI
This past weekend, we had the privilege of sponsoring HackGT in Atlanta alongside PrizePicks, Impiricus, Capital One, T-Mobile, Visa, Cedar, Warp, and more. 🚀

With 900+ participants and 278 projects built, the creativity and talent on display were incredible. The top 3 projects built with the TwelveLabs API explored use cases across news, healthcare, and sports:

📰 NewsCap: A fact-checking platform that combats misinformation by combining automated research tools with video content analysis. TwelveLabs’ Pegasus powers the video analysis and fact-checking capabilities.

🏥 BetterDoctor: An AI-powered healthcare platform that makes patient-doctor interactions smarter. TwelveLabs enables video indexing and search so past appointment recordings are fully searchable with accurate timestamps.

🏈 PropSage: A real-time sports prop pricing and insight platform that combines statistical priors, news evidence, and multimodal video context analysis from TwelveLabs to deliver fair line estimation and richer insights.

A huge thanks to James Le, Eric Kim, and James G. for representing TwelveLabs at HackGT and supporting the teams throughout the weekend! #VideoAI #TwelveLabs #HackGT
Video is where attention lives. But most “AI for video” still treats it like a pile of screenshots and transcripts. That misses what matters: sequence, sound, and storytelling.

In a new AdExchanger content studio piece, Bobby Mohr, VP of Revenue at TwelveLabs, explains why video-native AI is ad tech’s next frontier, and why text-based LLMs and traditional computer vision simply aren’t enough.

True video intelligence unlocks:
🔓 Every monetizable moment in a publisher’s catalog
📈 Scene-level context for more relevant and brand-safe ads
⚡️ Faster, smarter workflows that scale creative and yield
🙌 A better, less intrusive experience for consumers

Ad tech has hit its video AI plot twist, and the pace of change is only speeding up.

Read the full article here: https://lnkd.in/g6ypjkcv

#TwelveLabs #VideoAI
⏰ 5 days until the Generative AI Advertising Hackathon in NYC

The lineup is incredible:
✅ 300+ participants from brands, agencies, and AdTech companies
✅ Enterprise-grade AI APIs and tools from TwelveLabs, ElevenLabs, Amazon Web Services (AWS), and Swayable
✅ Direct showcase opportunity at Advertising Week NY
✅ Executive-level judges from the industry's biggest names: Alex Sherman, Michael Santana, Anshuk G., Ari Paparo, and Michael Bishop

This is not just about building cool demos - it is about creating solutions that marketing leaders can implement Monday morning. Video understanding, voice agents, multimodal analytics - all the tools you need to tackle advertising's biggest challenges. 🗝️

The best part? Your weekend hack could become the industry's next big thing. 🧲

Last chance to register: https://luma.com/g2b923qq 🖌️

See you in Manhattan! 🗽