
Extract insights from videos

This code pattern is part of the series Extracting Textual Insights from Videos with IBM Watson. Please complete the Extract audio from video, Build custom Speech to Text model with speaker diarization capabilities, and Use advanced NLP and tone analysis to extract meaningful insights code patterns before continuing, since the code patterns in the series build on one another.

In a virtually connected world, staying focused on work or education is very important. Studies suggest that most people tend to lose focus in live virtual meetings or virtual classroom sessions after about 20 minutes, which is why most meetings and virtual classrooms are recorded so that individuals can review them later.

What if these recordings could be analyzed with the help of AI and a detailed report of the meeting or classroom generated? Toward this goal, in this code pattern, given a video recording of a virtual meeting or classroom, we extract the audio from the video file using the open source library FFmpeg, transcribe the audio to get speaker-diarized notes with custom-trained language and acoustic Speech to Text models, and generate an NLU report consisting of Category, Concepts, Emotion, Entities, Keywords, Sentiment, Top Positive Sentences, and Word Clouds, all served from a Python Flask runtime.

In this code pattern, given any video, we will learn how to extract speaker-diarized notes and a meaningful insights report using Speech to Text, advanced NLP, and Tone Analysis.

When you have completed this code pattern, you will understand how to:

  • Use the Watson Speech to Text service to convert the human voice into the written word.
  • Use advanced NLP to analyze text and extract metadata from content such as concepts, entities, keywords, categories, sentiment, and emotion.
  • Leverage Tone Analyzer's cognitive linguistic analysis to identify a variety of tones at both the sentence and document level.

[Architecture diagram]

Flow

  1. User uploads a recorded video file of the virtual meeting or virtual classroom in the application.

  2. The FFmpeg library extracts audio from the video file.

  3. Watson Speech To Text transcribes the audio to give a diarized textual output (see the sketch after this flow).

  4. (Optional) Watson Language Translator translates the transcript into English if the recording is in another language.

  5. Watson Tone Analyzer analyzes the transcript and picks out the top positive statements from the transcript.

  6. Watson Natural Language Understanding reads the transcript to identify key pointers and get the sentiments and emotions.

  7. The key pointers and summary of the video are then presented to the user in the application.

  8. The user can then download the textual insights.
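
To make step 3 concrete, below is a minimal sketch of the transcription call using the ibm-watson Python SDK. The credential values and audio file name are placeholders, and the commented-out parameters show where the custom model IDs built in the earlier code patterns of the series would plug in.

    from ibm_watson import SpeechToTextV1
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    # Placeholder credentials; the application reads these from speechtotext.json.
    authenticator = IAMAuthenticator("<speech-to-text-apikey>")
    speech_to_text = SpeechToTextV1(authenticator=authenticator)
    speech_to_text.set_service_url("<speech-to-text-url>")

    with open("audio.mp3", "rb") as audio_file:
        response = speech_to_text.recognize(
            audio=audio_file,
            content_type="audio/mp3",
            speaker_labels=True,  # enables speaker diarization
            # language_customization_id="<custom-language-model-id>",
            # acoustic_customization_id="<custom-acoustic-model-id>",
        ).get_result()

    # Each result holds the best transcript alternative for one utterance.
    for result in response["results"]:
        print(result["alternatives"][0]["transcript"])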

Watch the Video

[Demo video]

Prerequisites

  1. IBM Cloud Account

  2. Docker

  3. Python

Steps

  1. Clone the repo

  2. Add the Credentials to the Application

  3. Deploy the Application

  4. Run the Application

1. Clone the repo

Clone the extract-textual-insights-from-video repo locally. In a terminal, run:

$ git clone https://github.com/IBM/extract-textual-insights-from-video

2. Add the Credentials to the Application

You will have to add Watson Speech to Text, Tone Analyzer, and Natural Language Understanding credentials to the application. A quick way to verify the credential files afterwards is sketched at the end of this section.

If you have completed the first three code patterns of the series, you can reuse the credentials created in the second and third code patterns by following the steps below.

Add existing credentials created from the series
  • In the repo cloned for the second code pattern of the series, you will have updated the speechtotext.json file with Speech to Text credentials. Copy that file and paste it into the parent folder of the repo that you cloned in step 1.

  • In the repo cloned for the third code pattern of the series, you will have updated the naturallanguageunderstanding.json file with Natural Language Understanding credentials and the toneanalyzer.json file with Tone Analyzer credentials. Copy both files and paste them into the parent folder of the repo that you cloned in step 1.

If you have landed on this code pattern directly without completing the previous code patterns of the series, you can add new credentials by following the steps below.

Add new credentials

Speech to Text service

  • In the Speech To Text dashboard, click on Service credentials.

  • Click on New credential and add a service credential as shown.

  • Once the credential is created, copy it using the copy icon (the two small overlapping squares) and paste it into the speechtotext.json file present in the cloned repo.

  • Back in IBM Cloud, create a Natural Language Understanding service; under Select a pricing plan select Lite and click Create as shown.

[NLU service creation]

  • Click on New credential and add a service credential as shown.

  • Once the credential is created, copy it using the copy icon (the two small overlapping squares) and paste it into the naturallanguageunderstanding.json file present in the cloned repo.

  • Back in IBM Cloud, create a Tone Analyzer service; under Select a pricing plan select Lite and click Create as shown.

[Tone Analyzer service creation]

  • In the Tone Analyzer dashboard, click on Service credentials.

  • Click on New credential and add a service credential as shown.

  • Once the credential is created, copy it using the copy icon (the two small overlapping squares) and paste it into the toneanalyzer.json file present in the cloned repo.
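
Whichever route you take, the application reads these three JSON files at startup. As a quick sanity check, a snippet like the following can confirm the files are in place; it assumes each file contains the standard IBM Cloud apikey and url fields.

    import json

    # The three credential files the application expects in the cloned repo.
    CREDENTIAL_FILES = [
        "speechtotext.json",
        "naturallanguageunderstanding.json",
        "toneanalyzer.json",
    ]

    for path in CREDENTIAL_FILES:
        with open(path) as f:
            creds = json.load(f)
        # Standard IBM Cloud service credentials carry "apikey" and "url" keys.
        missing = [key for key in ("apikey", "url") if key not in creds]
        if missing:
            print(f"{path}: missing {missing}")
        else:
            print(f"{path}: looks complete")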

3. Deploy the Application

With Docker installed
  • Change directory to the repo parent folder:
$ cd extract-textual-insights-from-video/
  • Build the Docker image as follows:
$ docker image build -t extract-textual-insights-from-video .
  • Once the image is built, run it as follows:
$ docker run -p 8080:8080 extract-textual-insights-from-video
Without Docker
  • Install the FFmpeg library.

For Mac users run the following command:

$ brew install ffmpeg

Other platform users can refer to the ffmpeg documentation to install the library.
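
For reference, the extraction step itself boils down to a single FFmpeg invocation. A hedged Python equivalent (file names are placeholders) looks like this:

    import subprocess

    def extract_audio(video_path: str, audio_path: str) -> None:
        """Extract the audio track from a video file with FFmpeg.

        "-vn" drops the video stream; FFmpeg infers the output
        codec from the audio_path extension (e.g. .mp3 or .wav).
        """
        subprocess.run(
            ["ffmpeg", "-y", "-i", video_path, "-vn", audio_path],
            check=True,
        )

    extract_audio("meeting.mp4", "meeting.mp3")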

  • Install the Python libraries as follows:

    • Change directory to the repo parent folder:
    $ cd extract-textual-insights-from-video/
    • Use pip to install the libraries:
    $ pip install -r requirements.txt
  • Finally, run the application as follows:

$ python app.py
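
The application is a standard Flask app listening on port 8080. For orientation, here is a stripped-down sketch of the kind of upload endpoint such an app exposes; the route and folder names are hypothetical, not the repo's actual code.

    import os
    from flask import Flask, request

    app = Flask(__name__)
    UPLOAD_FOLDER = "uploads"  # hypothetical folder name
    os.makedirs(UPLOAD_FOLDER, exist_ok=True)

    @app.route("/upload", methods=["POST"])  # hypothetical route
    def upload_video():
        video = request.files["file"]
        path = os.path.join(UPLOAD_FOLDER, video.filename)
        video.save(path)
        # ...hand the saved file to the FFmpeg/Watson pipeline here...
        return {"status": "processing", "file": video.filename}

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)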

4. Run the Application

[Sample output]

  • We'll begin by uploading a video from which we'll be extracting insights.

  • You can use any meeting or classroom video that you have, or you can download the video that we used for demonstration purposes.

  • This is a free educational video taken from cognitiveclass.ai. The video is an introduction to a Python course.

  • Click on the Drag and drop files here or click here to upload box and choose the video file you want to extract insights from.

[Upload screen]

Note: We have trained a custom language model and a custom acoustic model with the IBM Earnings Call Q1 2019 dataset. Hence the models will perform best on computer science and finance related content. The models can be trained on whatever content you wish to analyze; for example, train them on a sports dataset to get the best results with sports commentary.

[Speech to Text options]

  • You can find the advanced NLP and Tone Analyzer options that we worked with in the Use advanced NLP and Tone Analysis to extract meaningful insights code pattern from the series. If you have landed directly on this code pattern and created new credentials, you will see the Lite version options.

[NLU options]

  • Click on the Submit button and wait for the application to process. Once you have pressed Submit, the application in the background will:

    • Extract audio from the video.
    • Transcribe the audio to get speaker-diarized notes.
    • Use advanced NLP and Tone Analysis to extract an insights report.

[Submit button]

  • As soon as the video is uploaded, you can see the video preview on the screen as shown.

[Processing screen]

  • You can track the progress through the progress bar as shown.

  • The various progressing stages are:

    • Uploading
    • Extracting
    • Transcribing
    • NLU Analysing

NOTE: An approximate time to complete the extraction of insights will be displayed.

[Progress bar]

  • Once the video is transcribed, you can scroll down to see the speaker-diarized textual output under the Speech To Text tab as shown.

[Speech to Text output]
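
Under the hood, diarized notes are produced by pairing each transcribed word with the speaker_labels array in the Speech to Text response. Below is a simplified sketch of that grouping, assuming a response produced with speaker_labels=True (as in the earlier snippet), where word timestamps and speaker labels align one-to-one.

    def diarize(response: dict) -> list:
        """Group transcribed words into lines by speaker."""
        # Flatten word timestamps: each entry is [word, start_time, end_time].
        words = []
        for result in response["results"]:
            words.extend(result["alternatives"][0]["timestamps"])

        lines, current_speaker, current_words = [], None, []
        for (word, _start, _end), label in zip(words, response["speaker_labels"]):
            if label["speaker"] != current_speaker and current_words:
                lines.append(f"Speaker {current_speaker}: {' '.join(current_words)}")
                current_words = []
            current_speaker = label["speaker"]
            current_words.append(word)
        if current_words:
            lines.append(f"Speaker {current_speaker}: {' '.join(current_words)}")
        return lines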

  • Similarly, once the NLU Analysis is completed, you can click on the NLU Analysis tab to view the report.

[NLU Analysis output]

  • More about the features:
    • Category: Categorize your content using a five-level classification hierarchy. View the complete list of categories here.
    • Concept Tags: Identify high-level concepts that aren't necessarily directly referenced in the text.
    • Entity: Find people, places, events, and other types of entities mentioned in your content. View the complete list of entity types and subtypes here.
    • Keywords: Search your content for relevant keywords.
    • Sentiments: Analyze the sentiment toward specific target phrases and the sentiment of the document as a whole.
    • Emotions: Analyze emotion conveyed by specific target phrases or by the document as a whole.
    • Positive sentences: The Watson Tone Analyzer service uses linguistic analysis to detect emotional and language tones in written text.
  • Learn more features of:
    • Watson Natural Language Understanding service. Learn more.
    • Watson Tone Analyzer service. Learn more.
  • Once the NLU Analysis Report is generated, you can review it. The report consists of:

    • Features extracted by Watson Natural Language Understanding

    • Features extracted by Watson Tone Analyzer

    • Other features
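
A condensed sketch of how these features can be requested from the two services with the ibm-watson SDK follows; the version strings are documented API versions, and the credentials are placeholders.

    from ibm_watson import NaturalLanguageUnderstandingV1, ToneAnalyzerV3
    from ibm_watson.natural_language_understanding_v1 import (
        Features, CategoriesOptions, ConceptsOptions, EntitiesOptions,
        KeywordsOptions, SentimentOptions, EmotionOptions,
    )
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    transcript = "..."  # the speaker-diarized transcript text

    nlu = NaturalLanguageUnderstandingV1(
        version="2021-08-01",
        authenticator=IAMAuthenticator("<nlu-apikey>"),
    )
    nlu.set_service_url("<nlu-url>")
    nlu_report = nlu.analyze(
        text=transcript,
        features=Features(
            categories=CategoriesOptions(limit=3),
            concepts=ConceptsOptions(limit=3),
            entities=EntitiesOptions(limit=5),
            keywords=KeywordsOptions(limit=10, sentiment=True, emotion=True),
            sentiment=SentimentOptions(),
            emotion=EmotionOptions(),
        ),
    ).get_result()

    tone_analyzer = ToneAnalyzerV3(
        version="2017-09-21",
        authenticator=IAMAuthenticator("<tone-apikey>"),
    )
    tone_analyzer.set_service_url("<tone-url>")
    tone_report = tone_analyzer.tone(
        tone_input=transcript, content_type="text/plain", sentences=True
    ).get_result()

    # One way to pick top positive sentences: keep those whose tones include
    # "joy" (the code pattern's own selection logic may differ).
    positive = [
        s["text"] for s in tone_report.get("sentences_tone", [])
        if any(t["tone_id"] == "joy" for t in s["tones"])
    ][:5]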

[NLU Analysis report]

  1. Category: Based on the dataset that we used, you can see that the category was extracted as technology and computing, specifically Software.

Note: You can see the confidence score of the model in the green bubble tags.

  2. Entity: As you can see, the top entity extracted is Person, specifically Alex Ackles, indicating that in the video recording most of the emphasis is given to a person, Ackles.

  3. Concept Tags: The top three concept tags, United Nations, Aesthetics, and Statistics, are extracted from the video, indicating that the speaker spoke about these contexts more often.

  4. Keywords, Sentiments and Emotions: Top keywords along with their sentiments and emotions are extracted, giving a sentiment analysis of the entire meeting.

  5. Top Positive Sentences: Based on emotional tone and language tone, the positive sentences spoken in the video are extracted, limited to the top five positive sentences.

  6. Word Clouds: Based on the keywords, Nouns & Adjectives as well as Verbs are analyzed, and the result is then turned into word clouds (a sketch of this step follows below).
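
As an illustration of the word cloud step, here is a minimal sketch using the open source nltk and wordcloud libraries; using these particular libraries is an assumption for illustration, and the code pattern's own implementation may differ.

    import nltk
    from wordcloud import WordCloud

    nltk.download("punkt")
    nltk.download("averaged_perceptron_tagger")

    transcript = "Python makes data analysis simple"  # placeholder transcript

    # Part-of-speech tag the transcript, then split tokens into the
    # two groups shown in the report: nouns & adjectives, and verbs.
    tagged = nltk.pos_tag(nltk.word_tokenize(transcript))
    nouns_adjectives = " ".join(w for w, tag in tagged if tag.startswith(("NN", "JJ")))
    verbs = " ".join(w for w, tag in tagged if tag.startswith("VB"))

    WordCloud(background_color="white").generate(nouns_adjectives).to_file("nouns_adjectives.png")
    WordCloud(background_color="white").generate(verbs).to_file("verbs.png")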

  • The report can be printed by clicking on the Print button as shown.

[Print report]

Summary

We learned how to extract audio from video files, transcribe the audio with our custom-built models, and process the transcript to get speaker-diarized notes as well as an NLU analysis report.

License

This code pattern is licensed under the Apache License, Version 2.0. Separate third-party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 and the Apache License, Version 2.0.

Apache License FAQ
