Debug your Render services in Claude Code and Cursor. Try Render MCP.
import asyncio

# Fan out one summarization task per document and gather the results
@task
async def analyze_documents(file_paths: list[str]):
    files = await get_files(file_paths)
    results = await asyncio.gather(*[summarize_with_llm(path) for path in files])
    return results

# Retries up to 3 times, backing off exponentially between attempts
@task(options=Options(retry=Retry(max_retries=3, factor=2)))
def summarize_with_llm(path: str) -> dict:
    text = read_file(path)
    return call_llm_for_summary(text)
Avoid hitting rate limits with exponential backoff.
Restore to your last healthy state after an interruption.
Avoid duplicated work, even across retries.
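To make the backoff behavior concrete, here is a minimal sketch of what retrying with exponential backoff looks like in plain asyncio. The helper below and its delay math are illustrative assumptions, not the SDK's internals; the Retry options in the example above are meant to handle this for you.

import asyncio
import random

# Hypothetical helper, for illustration only: retry an async call with
# exponential backoff. Assumes the first retry waits wait_duration_ms and
# each later retry multiplies the wait by `factor`, plus a little jitter.
async def call_with_backoff(fn, *args, max_retries=3, wait_duration_ms=5000, factor=2):
    wait_ms = wait_duration_ms
    for attempt in range(max_retries + 1):
        try:
            return await fn(*args)
        except Exception:
            if attempt == max_retries:
                raise  # out of retries, surface the error
            # With these defaults the waits are roughly 5s, 10s, then 20s
            await asyncio.sleep(wait_ms / 1000 + random.uniform(0, 0.1))
            wait_ms *= factor

Spacing retries out this way, instead of retrying immediately, is what keeps a burst of failures from tripping a provider's rate limits again on the very next call.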
Spin up hundreds or even thousands of workers when your queue spikes.
Go beyond 15-minute serverless limits: tasks can stay active for a day or more.
Workers automatically spin down when there’s nothing to work on. Only pay for what you use.
Start running with just a few lines of code. No heavy frameworks or steep learning curve.
Iterate on your machine, then scale on ours.
View per-task logs, retries, and timelines.
import asyncio

# LangChain chat clients for each provider; imports for the @task, Options,
# and Retry decorators from the tasks SDK are omitted here
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_openai import ChatOpenAI

@task
async def query_llms_and_evaluate(prompt: str) -> str:
    # Query 3 LLMs in parallel using model names
    model_configs = [
        {"provider": "openai", "model": "gpt-5"},
        {"provider": "anthropic", "model": "claude-opus-4"},
        {"provider": "google", "model": "gemini-2.5-pro"},
    ]
    responses = await asyncio.gather(
        *[query_llm(cfg["provider"], cfg["model"], prompt) for cfg in model_configs]
    )
    # Have a 4th LLM evaluate and select the best response
    return await select_best_result(responses)

@task(options=Options(retry=Retry(max_retries=3, wait_duration_ms=5000, factor=2)))
async def query_llm(provider: str, model_name: str, prompt: str) -> str:
    providers = {
        "openai": ChatOpenAI,
        "anthropic": ChatAnthropic,
        "google": ChatGoogleGenerativeAI,
    }
    # Construct the appropriate LLM client based on provider
    llm = providers[provider](model=model_name)
    response = await llm.ainvoke([HumanMessage(content=prompt)])
    return response.content

@task(options=Options(retry=Retry(max_retries=2, wait_duration_ms=1000, factor=1.5)))
async def select_best_result(responses: list[str]) -> str:
    evaluator = ChatOpenAI(model="gpt-5")
    eval_prompt = (
        "Which response is best?\n\n"
        + "\n\n".join(f"Response {i + 1}: {r}" for i, r in enumerate(responses))
    )
    evaluation = await evaluator.ainvoke([HumanMessage(content=eval_prompt)])
    return evaluation.content
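One possible refinement of select_best_result, sketched below under the assumption that you want the winning candidate's text back rather than the judge's prose: ask the evaluator to reply with only a number, then index into the responses. The prompt wording, the digit parsing, and the function name are illustrative, not part of the example above.

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

# Illustrative variant: the judge returns an index, so the chosen
# candidate's text is passed through unchanged.
async def select_best_response_text(responses: list[str]) -> str:
    evaluator = ChatOpenAI(model="gpt-5")
    eval_prompt = (
        "Reply with only the number of the best response.\n\n"
        + "\n\n".join(f"Response {i + 1}: {r}" for i, r in enumerate(responses))
    )
    evaluation = await evaluator.ainvoke([HumanMessage(content=eval_prompt)])
    digits = "".join(ch for ch in evaluation.content if ch.isdigit())
    choice = int(digits) if digits else 1
    # Clamp to a valid 1-based index in case the judge answers out of range
    return responses[min(max(choice, 1), len(responses)) - 1]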