Integrate Make with AssemblyAI
Make (formerly Integromat) is a workflow automation tool that lets you connect various services together without requiring coding knowledge.
With the AssemblyAI app for Make, you can use our AI models to process audio data by transcribing it with speech recognition models, analyzing it with audio intelligence models, and building generative features on top of it with LLMs. You can supply audio to the AssemblyAI app and connect the output of our models to other services in your Make scenarios.
Create or edit a scenario in Make. Add a new module, search for AssemblyAI, and select the module that you want to use.
Create a new connection or select an existing one. In AssemblyAI API Key, enter the API key from your AssemblyAI dashboard, and click Save.
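The connection simply stores your API key and sends it with every request that your AssemblyAI modules make. If you want to confirm a key is valid outside of Make, here is a minimal sketch using Python's requests library, with a placeholder key; AssemblyAI authenticates requests with an authorization header:

```python
import requests

API_KEY = "YOUR_ASSEMBLYAI_API_KEY"  # placeholder; use the key from your dashboard

# Listing transcripts is a cheap way to check that the key is accepted.
response = requests.get(
    "https://api.assemblyai.com/v2/transcript",
    headers={"authorization": API_KEY},
)
print(response.status_code)  # 200 means the key works; 401 means it was rejected
```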
The AssemblyAI app for Make provides the following modules:
Upload an audio file to AssemblyAI so you can transcribe it.
You can pass the Upload URL output field to the Audio URL input field of the Transcribe an Audio File module.
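For reference, this module wraps AssemblyAI's upload endpoint. A rough equivalent outside of Make, assuming Python's requests library, a placeholder API key, and a hypothetical local file name:

```python
import requests

API_KEY = "YOUR_ASSEMBLYAI_API_KEY"  # placeholder

# Upload a local audio file; the response contains an "upload_url"
# that only AssemblyAI's servers can access.
with open("meeting.mp3", "rb") as f:  # hypothetical local file
    response = requests.post(
        "https://api.assemblyai.com/v2/upload",
        headers={"authorization": API_KEY},
        data=f,
    )

upload_url = response.json()["upload_url"]
print(upload_url)
```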
Transcribe an audio file and wait until the transcript has completed or failed.
Configure the Audio URL field with the URL of the audio file you want to transcribe. The Audio URL must be accessible by AssemblyAI’s servers.
If you don’t have a publicly accessible URL, you can use the Upload a File module to upload the audio file to AssemblyAI.
If you don’t want to wait until the transcript is ready, change the Wait until Transcript is Ready parameter to No under Show advanced settings.
Configure your desired Audio Intelligence models when you create the transcript. The results of the models will be included in the transcript output.
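For reference, this module wraps AssemblyAI's transcript creation endpoint. A minimal sketch of the equivalent REST call, assuming Python's requests library, a placeholder API key and audio URL, and sentiment analysis as one example of an Audio Intelligence model:

```python
import requests

API_KEY = "YOUR_ASSEMBLYAI_API_KEY"  # placeholder
headers = {"authorization": API_KEY}

# Create a transcript; enabling an Audio Intelligence model here means
# its results come back as part of the finished transcript.
response = requests.post(
    "https://api.assemblyai.com/v2/transcript",
    headers=headers,
    json={
        "audio_url": "https://example.com/audio.mp3",  # placeholder; must be reachable by AssemblyAI
        "sentiment_analysis": True,                    # example Audio Intelligence model
    },
)
transcript = response.json()
print(transcript["id"], transcript["status"])  # status starts as "queued"
```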
Wait for an existing transcript to be ready. This module will complete when the status of the transcript changes to “completed” or “error”.
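Conceptually, this is a polling loop against the transcript resource. A minimal sketch, assuming Python's requests library and a placeholder transcript ID:

```python
import time
import requests

API_KEY = "YOUR_ASSEMBLYAI_API_KEY"    # placeholder
transcript_id = "YOUR_TRANSCRIPT_ID"   # placeholder

# Poll until the transcript reaches a terminal status.
while True:
    transcript = requests.get(
        f"https://api.assemblyai.com/v2/transcript/{transcript_id}",
        headers={"authorization": API_KEY},
    ).json()
    if transcript["status"] in ("completed", "error"):
        break
    time.sleep(3)  # wait a few seconds between checks

print(transcript["status"])
```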
Create a webhook URL to receive a notification when a transcript is ready. When the transcript is ready, the webhook will be invoked with the transcript status and ID. The status will be “completed” or “error”.
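Outside of Make, the equivalent is passing a webhook URL when you create the transcript; AssemblyAI then POSTs the transcript ID and status to that URL when processing finishes. A minimal sketch with placeholder key and URLs:

```python
import requests

API_KEY = "YOUR_ASSEMBLYAI_API_KEY"  # placeholder

# Register a webhook when creating the transcript. When it finishes, AssemblyAI
# sends a POST like {"transcript_id": "...", "status": "completed"} to that URL
# (in Make, the URL generated by this module).
requests.post(
    "https://api.assemblyai.com/v2/transcript",
    headers={"authorization": API_KEY},
    json={
        "audio_url": "https://example.com/audio.mp3",            # placeholder
        "webhook_url": "https://hook.example.com/assemblyai",    # placeholder
    },
)
```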
Retrieve a transcript by ID.
Retrieve the paragraphs of a transcript.
Retrieve the sentences of a transcript.
Create SRT or VTT subtitles for a transcript.
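These four modules map to simple GET requests on the transcript resource. A combined sketch, assuming Python's requests library and a placeholder transcript ID:

```python
import requests

API_KEY = "YOUR_ASSEMBLYAI_API_KEY"  # placeholder
base = "https://api.assemblyai.com/v2/transcript/YOUR_TRANSCRIPT_ID"  # placeholder ID
headers = {"authorization": API_KEY}

transcript = requests.get(base, headers=headers).json()                 # full transcript
paragraphs = requests.get(f"{base}/paragraphs", headers=headers).json() # paragraphs
sentences = requests.get(f"{base}/sentences", headers=headers).json()   # sentences
srt = requests.get(f"{base}/srt", headers=headers).text                 # or /vtt for VTT subtitles

print(transcript["status"])
print(srt[:200])
```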
First, you need to configure PII audio redaction using these fields when you create the transcript:
- Redact PII: Yes
- Redact PII Audio: Yes
- Redact PII Policies: Configure at least one PII policy
Then, you can use this module to retrieve the redacted audio of the transcript.
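Outside of Make, the same flow is to create the transcript with the PII redaction fields set and then fetch the redacted audio once the transcript has completed. A minimal sketch with a placeholder key, audio URL, and an example policy list:

```python
import requests

API_KEY = "YOUR_ASSEMBLYAI_API_KEY"  # placeholder
headers = {"authorization": API_KEY}

# 1) Create the transcript with PII audio redaction enabled.
transcript = requests.post(
    "https://api.assemblyai.com/v2/transcript",
    headers=headers,
    json={
        "audio_url": "https://example.com/audio.mp3",             # placeholder
        "redact_pii": True,
        "redact_pii_audio": True,
        "redact_pii_policies": ["person_name", "phone_number"],   # at least one policy
    },
).json()

# 2) After the transcript status is "completed", request the redacted audio.
redacted = requests.get(
    f"https://api.assemblyai.com/v2/transcript/{transcript['id']}/redacted-audio",
    headers=headers,
).json()
print(redacted.get("redacted_audio_url"))
```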
Search for words in a transcript.
Paginate over all transcripts.
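Both of these modules correspond to GET requests: a word search on a single transcript, and the paginated transcript list. A minimal sketch with a placeholder key, transcript ID, and search terms:

```python
import requests

API_KEY = "YOUR_ASSEMBLYAI_API_KEY"  # placeholder
headers = {"authorization": API_KEY}

# Search a transcript for specific words.
matches = requests.get(
    "https://api.assemblyai.com/v2/transcript/YOUR_TRANSCRIPT_ID/word-search",  # placeholder ID
    headers=headers,
    params={"words": "marketing,revenue"},  # comma-separated search terms
).json()
print(matches["total_count"])

# List transcripts one page at a time.
page = requests.get(
    "https://api.assemblyai.com/v2/transcript",
    headers=headers,
    params={"limit": 20},
).json()
print(len(page["transcripts"]), page["page_details"])
```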
Delete a transcript by ID. Deleting a transcript does not delete the transcript resource itself, but removes the data from the resource and marks it as deleted.
You can only invoke this module after the transcript status is “completed” or “error”.
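The equivalent REST call is a DELETE on the transcript resource, shown here with a placeholder key and transcript ID:

```python
import requests

API_KEY = "YOUR_ASSEMBLYAI_API_KEY"  # placeholder

# Remove the transcript's data; the resource remains but is marked as deleted.
response = requests.delete(
    "https://api.assemblyai.com/v2/transcript/YOUR_TRANSCRIPT_ID",  # placeholder ID
    headers={"authorization": API_KEY},
)
print(response.status_code)
```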
Prompt different LLMs over your audio data using LeMUR.
You have to configure either the Transcript IDs or the Input Text input field.
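This module wraps LeMUR's task endpoint. A minimal sketch of the equivalent request, with a placeholder key, transcript ID, and prompt; the model name shown is just one example of the values the module accepts:

```python
import requests

API_KEY = "YOUR_ASSEMBLYAI_API_KEY"  # placeholder

# Run a LeMUR task over one or more transcripts (or over raw input_text instead).
response = requests.post(
    "https://api.assemblyai.com/lemur/v3/generate/task",
    headers={"authorization": API_KEY},
    json={
        "transcript_ids": ["YOUR_TRANSCRIPT_ID"],      # placeholder; alternatively use "input_text"
        "prompt": "Summarize the key decisions made in this meeting.",
        "final_model": "anthropic/claude-3-5-sonnet",  # example model choice
    },
)
result = response.json()
print(result["request_id"])
print(result["response"])
```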
Delete the data for a previously submitted LeMUR request. Response data from the LLM, as well as any context provided in the original request, will be removed.
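The equivalent REST call is a DELETE on the LeMUR request, using the request ID returned by the task; placeholder values shown:

```python
import requests

API_KEY = "YOUR_ASSEMBLYAI_API_KEY"  # placeholder

# Purge the stored data for a previous LeMUR request.
response = requests.delete(
    "https://api.assemblyai.com/lemur/v3/YOUR_LEMUR_REQUEST_ID",  # placeholder request ID
    headers={"authorization": API_KEY},
)
print(response.status_code)
```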
Make your own REST API HTTP requests to the AssemblyAI API using your existing connection.
You can learn more about using Make with AssemblyAI in these resources: