15 | 15 | "\n",
16 | 16 | "Amazon Rekognition Video inappropriate or offensive content detection in stored videos is an asynchronous operation. To start detecting inappropriate or offensive content, call [StartContentModeration](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_StartContentModeration.html). Amazon Rekognition Video publishes the completion status of the video analysis to an Amazon Simple Notification Service topic. If the video analysis is successful, call [GetContentModeration](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_GetContentModeration.html) to get the analysis results. For more information about starting video analysis and getting the results, see [Calling Amazon Rekognition Video operations](https://docs.aws.amazon.com/rekognition/latest/dg/api-video.html). \n",
17 | 17 | "\n",
18 |    | - "This tutorial will show you how to use Amazon Rekognition Moderation API to moderate stored videos, how to extract text from a video using Amazon Rekognition for Text Moderation, and how to transcribe the audio of the video into text for Text Moderation."
   | 18 | + "This tutorial will show you how to use the Amazon Rekognition Moderation API to moderate stored videos, how to extract text from a video using Amazon Rekognition for Text Moderation, and how to transcribe the audio of the video into text for Text Moderation. This lab will not cover the text moderation logic; refer to the labs in the **04-text-moderation** module for more examples."
19 | 19 | ]
20 | 20 | },
21 | 21 | {
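For readers running this outside the notebook, the asynchronous flow described above reduces to a start call plus a status loop. A minimal sketch, assuming the video is already in S3 (bucket and key are placeholders) and polling instead of wiring up the SNS NotificationChannel:

```python
import time
import boto3

rekognition = boto3.client('rekognition', region_name='us-east-1')

# Kick off the asynchronous moderation job; the call returns a JobId immediately.
job = rekognition.start_content_moderation(
    Video={'S3Object': {'Bucket': 'my-demo-bucket', 'Name': 'videos/sample.mp4'}},  # placeholder location
    MinConfidence=50,
)

# Poll until the job finishes (production code would subscribe an SNS topic
# via the NotificationChannel parameter instead of polling).
result = rekognition.get_content_moderation(JobId=job['JobId'])
while result['JobStatus'] == 'IN_PROGRESS':
    time.sleep(10)
    result = rekognition.get_content_moderation(JobId=job['JobId'])

print(result['JobStatus'], len(result['ModerationLabels']))
```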
26 | 26 | "\n",
27 | 27 | "- [Step 1: Setup Notebook](#step1)\n",
28 | 28 | "- [Step 2: Moderate video using Rekognition video moderation API](#step2)\n",
29 |    | - "- [Step 3: Moderate text in video](#step3)\n",
30 |    | - "- [Step 4: Moderate audio in video](#step4)\n",
   | 29 | + "- [Step 3: Detect text in video](#step3)\n",
   | 30 | + "- [Step 4: Transcribe audio in video](#step4)\n",
31 | 31 | "- [Step 5: Clean up](#step5)"
32 | 32 | ]
33 | 33 | },
79 | 79 | "\n",
80 | 80 | "s3=boto3.client('s3', region_name=region)\n",
81 | 81 | "rekognition=boto3.client('rekognition', region_name=region)\n",
82 |    | - "sagemaker = boto3.client('sagemaker', region_name=region)\n",
83 | 82 | "comprehend = boto3.client('comprehend', region_name=region)\n",
84 | 83 | "transcribe=boto3.client('transcribe', region_name=region)"
85 | 84 | ]
176 | 175 | "for label in getContentModeration[\"ModerationLabels\"]:\n",
177 | 176 | "    if len(label[\"ModerationLabel\"][\"ParentName\"]) > 0:\n",
178 | 177 | "        label_html += f'''<a onclick=\"document.getElementById('cccvid1').currentTime={round(label['Timestamp']/1000)}\">[{label['Timestamp']} ms]: \n",
179 |     | - "            {label['ModerationLabel']['Name']}, confidence: {round(label['ModerationLabel']['Confidence'])}</a><br/>\n",
    | 178 | + "            {label['ModerationLabel']['Name']}, confidence: {round(label['ModerationLabel']['Confidence'],2)}%</a><br/>\n",
180 | 179 | "        '''\n",
181 | 180 | "display(HTML(video_tag))\n",
182 | 181 | "display(HTML(label_html))"
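The confidence values rendered above can also drive downstream filtering. A hypothetical post-processing step, assuming `getContentModeration` is the `get_content_moderation` response used in this cell:

```python
from collections import defaultdict

# Keep only high-confidence detections and group them by top-level category.
# The 80% threshold is an arbitrary example value.
by_category = defaultdict(list)
for label in getContentModeration['ModerationLabels']:
    ml = label['ModerationLabel']
    if ml['ParentName'] and ml['Confidence'] >= 80:
        by_category[ml['ParentName']].append((label['Timestamp'], ml['Name']))

for category, hits in by_category.items():
    print(f"{category}: {len(hits)} detection(s), first at {hits[0][0]} ms")
```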
186 | 185 | "cell_type": "markdown",
187 | 186 | "metadata": {},
188 | 187 | "source": [
189 |     | - "# Step 3: Moderate text in video <a id=\"step3\"></a>"
    | 188 | + "# Step 3: Detect text in video <a id=\"step3\"></a>"
190 | 189 | ]
191 | 190 | },
192 | 191 | {
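For context, a `getTextDetection` response like the one consumed in the next cell comes from the same asynchronous start/get pattern. A sketch, reusing the `rekognition` client created earlier (bucket and key are placeholders):

```python
import time

# Start the asynchronous text detection job for a stored video.
textJob = rekognition.start_text_detection(
    Video={'S3Object': {'Bucket': 'my-demo-bucket', 'Name': 'videos/sample.mp4'}}  # placeholder location
)

# Poll until the job leaves IN_PROGRESS; the result carries TextDetections.
getTextDetection = rekognition.get_text_detection(JobId=textJob['JobId'])
while getTextDetection['JobStatus'] == 'IN_PROGRESS':
    time.sleep(10)
    getTextDetection = rekognition.get_text_detection(JobId=textJob['JobId'])
```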
247 | 246 | "for txt in getTextDetection[\"TextDetections\"]:\n",
248 | 247 | "    if txt[\"TextDetection\"][\"Type\"] == 'LINE':\n",
249 | 248 | "        text_html += f'''<a onclick=\"document.getElementById('cccvid2').currentTime={round(txt['Timestamp']/1000)}\">[{txt['Timestamp']} ms]: \n",
250 |     | - "            {txt[\"TextDetection\"][\"DetectedText\"]}, confidence: {round(txt[\"TextDetection\"][\"Confidence\"],2)}</a><br/>\n",
    | 249 | + "            {txt[\"TextDetection\"][\"DetectedText\"]}, confidence: {round(txt[\"TextDetection\"][\"Confidence\"],2)}%</a><br/>\n",
251 | 250 | "        '''\n",
252 | 251 | "display(HTML(video_tag))\n",
253 | 252 | "display(HTML(text_html))"
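Since the same on-screen text is usually detected across many consecutive frames, a hypothetical prep step before handing these lines to the text moderation labs referenced in the intro is to collapse them into unique strings:

```python
# Deduplicate per-frame LINE detections into a set of normalized strings.
unique_lines = {
    txt['TextDetection']['DetectedText'].strip().lower()
    for txt in getTextDetection['TextDetections']
    if txt['TextDetection']['Type'] == 'LINE'
}
print(unique_lines)
```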
264 | 263 | "cell_type": "markdown",
265 | 264 | "metadata": {},
266 | 265 | "source": [
267 |     | - "# Step 4: Moderate audio in video <a id=\"step4\"></a>\n",
    | 266 | + "# Step 4: Transcribe audio in video <a id=\"step4\"></a>\n",
268 | 267 | "\n",
269 | 268 | "We have moderated the video and extracted text from the video using Rekognition. Now, we will transcribe the audio to text using Amazon Transcribe.\n",
270 | 269 | "\n",
277 | 276 | "metadata": {},
278 | 277 | "outputs": [],
279 | 278 | "source": [
280 |     | - "job_name = 'video_moderation_job1'\n",
    | 279 | + "import uuid\n",
    | 280 | + "job_name = f'video_moderation_{str(uuid.uuid1())[0:4]}'\n",
281 | 281 | "\n",
282 | 282 | "transcribe.start_transcription_job(\n",
283 | 283 | "    TranscriptionJobName = job_name,\n",
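Only the first lines of the call appear in this hunk; for reference, a fuller sketch with placeholder media settings (the uuid suffix above matters because transcription job names must be unique within an account and Region):

```python
# Assumes the `transcribe` client and `job_name` from the cells above;
# the S3 URI, media format, and language code are placeholders.
transcribe.start_transcription_job(
    TranscriptionJobName=job_name,
    Media={'MediaFileUri': 's3://my-demo-bucket/videos/sample.mp4'},
    MediaFormat='mp4',
    LanguageCode='en-US',
)
```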
296 | 296 | "    time.sleep(5)\n",
297 | 297 | "    print('.', end='')\n",
298 | 298 | "\n",
299 |     | - "    getTranscription = rekognition.get_transcription_job(TranscriptionJobName = job_name)\n",
    | 299 | + "    getTranscription = transcribe.get_transcription_job(TranscriptionJobName = job_name)\n",
300 | 300 | "\n",
301 | 301 | "display(getTranscription['TranscriptionJob']['TranscriptionJobStatus'])"
302 | 302 | ]
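Once the job status is COMPLETED, the transcript itself is downloaded from a presigned URI in the response. A minimal sketch using only the standard library:

```python
import json
import urllib.request

# The response exposes the transcript as a JSON document at a presigned URI.
uri = getTranscription['TranscriptionJob']['Transcript']['TranscriptFileUri']
with urllib.request.urlopen(uri) as resp:
    transcript = json.load(resp)

# Plain-text transcript, ready for the text moderation labs mentioned earlier.
text = transcript['results']['transcripts'][0]['transcript']
print(text[:200])
```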
361 | 361 | "metadata": {
362 | 362 | "instance_type": "ml.t3.medium",
363 | 363 | "kernelspec": {
364 |     | - "display_name": "Python 3.10.2 64-bit",
    | 364 | + "display_name": "Python 3 (Data Science)",
365 | 365 | "language": "python",
366 | 366 | "name": "python3"
367 | 367 | },
375 | 375 | "name": "python",
376 | 376 | "nbconvert_exporter": "python",
377 | 377 | "pygments_lexer": "ipython3",
378 |     | - "version": "3.10.2"
    | 378 | + "version": "3.7.10"
379 | 379 | },
380 | 380 | "vscode": {
381 | 381 | "interpreter": {