|
13 | 13 | "id": "6bea4d77", |
14 | 14 | "metadata": {}, |
15 | 15 | "source": [ |
16 | | - "TwelveLabs is a leading provider of multimodal AI models specializing in video understanding and analysis. TwelveLabs' advanced models enable sophisticated video search, analysis, and content generation capabilities through state-of-the-art computer vision and natural language processing technologies. [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html) now offers two TwelveLabs models: [TwelveLabs Pegasus 1.2](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-pegasus.html), which provides comprehensive video understanding and analysis, and [TwelveLabs Marengo Embed 2.7](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-marengo.html), which generates high-quality embeddings for video, text, audio, and image content. These models empower developers to build applications that can intelligently process, analyze, and derive insights from video data at scale.\n", |
| 16 | + "TwelveLabs is a leading provider of multimodal AI models specializing in video understanding and analysis. TwelveLabs' advanced models enable sophisticated video search, analysis, and content generation capabilities through state-of-the-art computer vision and natural language processing technologies. [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html) now offers two TwelveLabs models: [TwelveLabs Pegasus 1.2](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-pegasus.html), which provides comprehensive video understanding and analysis, and [TwelveLabs Marengo Embed 3.0](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-marengo-3.html), which generates high-quality embeddings for video, text, audio, and image content. These models empower developers to build applications that can intelligently process, analyze, and derive insights from video data at scale.\n", |
17 | 17 | "\n", |
18 | 18 | "In this notebook, we'll use the TwelveLabs Marengo model to generate embeddings for text, image, and video content, enabling multimodal search and analysis across different media types. " |
19 | 19 | ] |
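The multimodal search the notebook describes comes down to comparing embedding vectors across media types. A minimal sketch in plain Python, using small hypothetical vectors in place of real Marengo output (actual embeddings are much higher-dimensional):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings standing in for Marengo output
query_embedding = [0.1, 0.8, 0.2]          # e.g. from a text query
video_segments = {
    "segment-0": [0.1, 0.7, 0.3],
    "segment-1": [0.9, 0.1, 0.0],
}

# Rank video segments by similarity to the query
ranked = sorted(
    video_segments.items(),
    key=lambda kv: cosine_similarity(query_embedding, kv[1]),
    reverse=True,
)
```

Because Marengo places all modalities in a shared embedding space, the same ranking step works whether the query embedding came from text, an image, or audio.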
|
106 | 106 | "metadata": {}, |
107 | 107 | "outputs": [], |
108 | 108 | "source": [ |
109 | | - "%uv pip install -r requirements.txt" |
| 109 | + "!uv pip install -r requirements.txt" |
110 | 110 | ] |
111 | 111 | }, |
112 | 112 | { |
|
317 | 317 | "bedrock_client = boto3.client(\"bedrock-runtime\")\n", |
318 | 318 | "s3_client = boto3.client(\"s3\")\n", |
319 | 319 | "aws_account_id = boto3.client('sts').get_caller_identity()[\"Account\"]\n", |
320 | | - "model_id = \"twelvelabs.marengo-embed-2-7-v1:0\"\n", |
321 | | - "cris_model_id = \"us.twelvelabs.marengo-embed-2-7-v1:0\"\n", |
| 320 | + "model_id = \"twelvelabs.marengo-embed-3-0-v1:0\"\n", |
| 321 | + "cris_model_id = \"us.twelvelabs.marengo-embed-3-0-v1:0\"\n", |
322 | 322 | "s3_bucket_name = '<an S3 bucket for storing the outputs>'\n", |
323 | 323 | "\n", |
324 | 324 | "bedrock_twelvelabs_helper = BedrockTwelvelabsHelper(bedrock_client=bedrock_client, \n", |
|
359 | 359 | "## Download a Sample Video and Upload to S3 as Input\n", |
360 | 360 | "We'll use the TwelveLabs Marengo model to generate embeddings from this video and perform content-based search.\n", |
361 | 361 | "\n", |
362 | | - "\n", |
| 362 | + "\n", |
363 | 363 | "We will use an open-source sample video, [Meridian](https://en.wikipedia.org/wiki/Meridian_(film)), as input to generate embeddings." |
364 | 364 | ] |
365 | 365 | }, |
|
421 | 421 | "id": "6e9914e4", |
422 | 422 | "metadata": {}, |
423 | 423 | "source": [ |
424 | | - "#### Marengo Embed 2.7 on Bedrock\n", |
| 424 | + "#### Marengo Embed 3.0 on Bedrock\n", |
425 | 425 | "\n", |
426 | 426 | "A multimodal embedding model that generates high-quality vector representations of video, text, audio, and image content for similarity search, clustering, and other machine learning tasks. The model supports multiple input modalities and provides specialized embeddings optimized for different use cases.\n", |
427 | 427 | "\n", |
428 | 428 | "The model supports asynchronous inference through the [StartAsyncInvoke API](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_StartAsyncInvoke.html).\n", |
429 | 429 | "- Provider — TwelveLabs\n", |
430 | 430 | "- Categories — Embeddings, multimodal\n", |
431 | | - "- Model ID — `twelvelabs.marengo-embed-2-7-v1:0`\n", |
| 431 | + "- Model ID — `us.twelvelabs.marengo-embed-3-0-v1:0`\n", |
432 | 432 | "- Input modality — Video, Text, Audio, Image\n", |
433 | 433 | "- Output modality — Embeddings\n", |
434 | 434 | "- Max video length — up to 2 hours (< 2 GB file size)\n", |
435 | 435 | "\n", |
436 | 436 | "**Resources:**\n", |
437 | | - "- [AWS Docs: TwelveLabs Marengo Embed 2.7](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-marengo.html)\n", |
| 437 | + "- [AWS Docs: TwelveLabs Marengo Embed 3.0](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-marengo-3.html)\n", |
438 | 438 | "- [TwelveLabs Docs: Marengo](https://docs.twelvelabs.io/v1.3/docs/concepts/models/marengo)\n" |
439 | 439 | ] |
440 | 440 | }, |
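Since Marengo is invoked asynchronously via `StartAsyncInvoke`, the request bundles the model ID, a model input pointing at the media in S3, and an S3 output location for the resulting embeddings. A sketch of building such a request; the payload shape follows the documented Marengo async-invoke pattern, and the bucket names, file, and account ID are hypothetical (the exact 3.0 input schema should be confirmed against the AWS docs):

```python
def build_async_invoke_request(model_id, video_s3_uri, output_s3_uri, bucket_owner):
    # Assemble the keyword arguments for bedrock_runtime.start_async_invoke
    return {
        "modelId": model_id,
        "modelInput": {
            "inputType": "video",
            "mediaSource": {
                "s3Location": {"uri": video_s3_uri, "bucketOwner": bucket_owner}
            },
        },
        "outputDataConfig": {
            "s3OutputDataConfig": {"s3Uri": output_s3_uri}
        },
    }

request = build_async_invoke_request(
    "us.twelvelabs.marengo-embed-3-0-v1:0",
    "s3://my-input-bucket/meridian.mp4",   # hypothetical input video
    "s3://my-output-bucket/embeddings/",   # hypothetical output prefix
    "123456789012",                        # hypothetical AWS account ID
)
# The request would then be submitted with:
#   invocation = bedrock_runtime.start_async_invoke(**request)
# and the embeddings retrieved from the output S3 prefix once the
# async job completes.
```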
|
491 | 491 | "print(f\"✅ Video embedding created successfully with {len(video_embedding_data)} segment(s)\")" |
492 | 492 | ] |
493 | 493 | }, |
| 494 | + { |
| 495 | + "cell_type": "markdown", |
| 496 | + "id": "beeb5d2b", |
| 497 | + "metadata": {}, |
| 498 | + "source": [ |
| 499 | + "Print the first visual embedding segment for reference:" |
| 500 | + ] |
| 501 | + }, |
494 | 502 | { |
495 | 503 | "cell_type": "code", |
496 | 504 | "execution_count": null, |
497 | 505 | "id": "7864241b", |
498 | 506 | "metadata": {}, |
499 | 507 | "outputs": [], |
500 | 508 | "source": [ |
501 | | - "[x for x in video_embedding_data if x[\"embeddingOption\"] == \"visual-image\"][0]" |
| 509 | + "[x for x in video_embedding_data if x[\"embeddingOption\"] == \"visual\"][0]" |
502 | 510 | ] |
503 | 511 | }, |
504 | 512 | { |
|
583 | 591 | "metadata": {}, |
584 | 592 | "outputs": [], |
585 | 593 | "source": [ |
586 | | - "text_query = \"a person smoking in a room\"\n", |
| 594 | + "text_query = \"A person smoking in a room\"\n", |
587 | 595 | "text_search_results = bedrock_twelvelabs_helper.search_videos_by_text(text_query, top_k=3)\n" |
588 | 596 | ] |
589 | 597 | }, |
|
776 | 784 | ], |
777 | 785 | "metadata": { |
778 | 786 | "kernelspec": { |
779 | | - "display_name": "aws3", |
| 787 | + "display_name": ".venv", |
780 | 788 | "language": "python", |
781 | 789 | "name": "python3" |
782 | 790 | }, |
|