Search
Understand everything.
Get lightning-fast, context-aware results that pinpoint the exact moment you need, going beyond tags and into a whole new dimension of multimodal understanding.
Search 01
Search by text or image
Use natural language queries or images to quickly uncover semantically-related moments, even across petabytes of data.
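For developers, here is a minimal sketch of what a text or image query can look like with the TwelveLabs Python SDK, reusing the client and search call from the quickstart snippet further down this page. The image-query parameters (query_media_type, query_media_url) are assumptions based on the current SDK and may differ in your version.

from twelvelabs import TwelveLabs

client = TwelveLabs("<YOUR_API_KEY>")

# Text query: describe the moment in natural language.
text_result = client.search.query(
    "<YOUR_INDEX_ID>",
    "An artist climbing up the ladder that he painted.",
    ["visual", "audio"],
)
print(text_result)

# Image query (assumed parameter names; check the SDK reference for your version):
# image_result = client.search.query(
#     index_id="<YOUR_INDEX_ID>",
#     query_media_type="image",
#     query_media_url="https://example.com/reference-frame.jpg",
#     options=["visual"],
# )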
Search 02
Search across modalities
Sound, speech, text and visuals — make all the information in your video reachable, and find insights previously hidden.
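As a small illustration, the options list in the search call from the quickstart snippet below controls which modalities are searched. The exact option values available depend on the engine configured on your index, so treat the single-option call here as an assumption.

from twelvelabs import TwelveLabs

client = TwelveLabs("<YOUR_API_KEY>")

# Scope the search to visual content only (assumed to be a valid subset of options).
visual_only = client.search.query("<YOUR_INDEX_ID>", "a crowd cheering", ["visual"])

# Search visual and audio together, as in the quickstart snippet below.
multimodal = client.search.query("<YOUR_INDEX_ID>", "a crowd cheering", ["visual", "audio"])
print(multimodal)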
Search 03
Search in your domain language
Fine-tune our foundation models to the nuances of your business, and let your team search in language that comes naturally.
Identify the most impactful scenes to power production workflows.
Search for specific moments within footage or across vast video archives.
Let customers easily find any video moment within your platform.
Comb through petabytes of data using natural language queries.
Pinpoint precise moments to conduct investigations and manage evidence.
Let users search in their language, with over 100 languages supported.
Find and match ads to relevant media moments for increased engagement.
Fine-tune our models so your team can search in the language of their sector.
Python
Olympic Video Classification Application
The Olympic Video Classification Application is a sample app that categorizes various Olympic sports from video clips.
Try this sample app
Node
Shade Finder App: Pinpoint Specific Colors in Videos
Whether you're looking to find the perfect berry-toned lipstick or just curious about spotting specific colors in your videos, this sample app shows how to leverage cutting-edge AI to do so effortlessly.
Try this sample app
Python
Video Highlight Generator
The YouTube Chapter Highlight Generator automatically generates chapter timestamps for YouTube videos.
Try this sample app
Python
Node
from twelvelabs import TwelveLabs
import os

client = TwelveLabs("<YOUR_API_KEY>")

# Create a new index
index = client.index.create(
    name="My First Index",
    engines=[
        {
            "name": "marengo2.7",
            "options": ["visual", "audio"],
        },
    ],
)

# Create a new task on the index (upload the video)
video_path = os.path.join(os.path.dirname(__file__), "<YOUR_FILE_PATH>")
task = client.task.create(index_id=index.id, file=video_path, language="en")

# Wait for indexing to finish
task.wait_for_done()

# Search your index
query = "An artist climbing up the ladder that he painted."
result = client.search.query(index.id, query, ["visual", "audio"])
print(result)
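Continuing from the snippet above, if you want to work with individual matches rather than printing the raw response, the result can typically be iterated. The clip fields used here (start, end, score) follow the SDK's documented response shape, but verify them against the version you install.

# Assumed clip fields; verify against your twelvelabs SDK version.
for clip in result.data:
    print(f"{clip.start:.1f}s - {clip.end:.1f}s  score={clip.score}")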
Deploy your custom-trained model on any cloud. See and surface everything in your video, then go beyond with AI that can realize your most game-changing ideas.
Try out TwelveLabs's search tools on your own videos to see them in a whole new way.
© 2021 - 2025 TwelveLabs, Inc. All Rights Reserved