Picovoice / leopard

License: Apache-2.0
On-device speech-to-text engine powered by deep learning

Programming Languages

Python
139,335 projects - #7 most used programming language
Java
68,154 projects - #9 most used programming language
C#
18,002 projects
TypeScript
32,286 projects
Rust
11,053 projects
Go
31,211 projects - #10 most used programming language

Projects that are alternatives of or similar to leopard

demo vietasr
Vietnamese Speech Recognition
Stars: ✭ 22 (-93.79%)
Mutual labels:  speech-recognition, automatic-speech-recognition, speech-to-text, stt, asr
sova-asr
SOVA ASR (Automatic Speech Recognition)
Stars: ✭ 123 (-65.25%)
Mutual labels:  speech-recognition, automatic-speech-recognition, speech-to-text, stt, asr
kaldi-long-audio-alignment
Long audio alignment using Kaldi
Stars: ✭ 21 (-94.07%)
Mutual labels:  speech-recognition, automatic-speech-recognition, speech-to-text, transcription, asr
speech-recognition-evaluation
Evaluate results from ASR/Speech-to-Text quickly
Stars: ✭ 25 (-92.94%)
Mutual labels:  speech-recognition, speech-to-text, stt, asr
KeenASR-Android-PoC
A proof-of-concept app using KeenASR SDK on Android. WE ARE HIRING: https://keenresearch.com/careers.html
Stars: ✭ 21 (-94.07%)
Mutual labels:  voice-recognition, speech-recognition, speech-to-text, on-device
open-speech-corpora
💎 A list of accessible speech corpora for ASR, TTS, and other Speech Technologies
Stars: ✭ 841 (+137.57%)
Mutual labels:  voice-recognition, speech-recognition, speech-to-text, stt
Cheetah
On-device streaming speech-to-text engine powered by deep learning
Stars: ✭ 383 (+8.19%)
Mutual labels:  voice-recognition, speech-recognition, speech-to-text, asr
Vosk Api
Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
Stars: ✭ 1,357 (+283.33%)
Mutual labels:  voice-recognition, speech-recognition, speech-to-text, asr
spokestack-ios
Spokestack: give your iOS app a voice interface!
Stars: ✭ 27 (-92.37%)
Mutual labels:  voice-recognition, speech-recognition, speech-to-text, asr
react-native-spokestack
Spokestack: give your React Native app a voice interface!
Stars: ✭ 53 (-85.03%)
Mutual labels:  voice-recognition, speech-recognition, speech-to-text, asr
Voice Overlay Android
🗣 An overlay that gets your user’s voice permission and input as text in a customizable UI
Stars: ✭ 189 (-46.61%)
Mutual labels:  voice-recognition, speech-recognition, speech-to-text
Speech To Text Benchmark
speech to text benchmark framework
Stars: ✭ 481 (+35.88%)
Mutual labels:  voice-recognition, speech-recognition, speech-to-text
Voice Overlay Ios
🗣 An overlay that gets your user’s voice permission and input as text in a customizable UI
Stars: ✭ 440 (+24.29%)
Mutual labels:  voice-recognition, speech-recognition, speech-to-text
Vosk
VOSK Speech Recognition Toolkit
Stars: ✭ 182 (-48.59%)
Mutual labels:  voice-recognition, speech-recognition, speech-to-text
Rhino
On-device speech-to-intent engine powered by deep learning
Stars: ✭ 406 (+14.69%)
Mutual labels:  voice-recognition, speech-recognition, speech-to-text
octopus
On-device speech-to-index engine powered by deep learning.
Stars: ✭ 30 (-91.53%)
Mutual labels:  voice-recognition, speech-recognition, speech-to-text
AmazonSpeechTranslator
End-to-end Solution for Speech Recognition, Text Translation, and Text-to-Speech for iOS using Amazon Translate and Amazon Polly as AWS Machine Learning managed services.
Stars: ✭ 50 (-85.88%)
Mutual labels:  voice-recognition, speech-recognition, speech-to-text
Sonus
💬 /so.nus/ STT (speech to text) for Node with offline hotword detection
Stars: ✭ 532 (+50.28%)
Mutual labels:  voice-recognition, speech-recognition, speech-to-text
spokestack-android
Extensible Android mobile voice framework: wakeword, ASR, NLU, and TTS. Easily add voice to any Android app!
Stars: ✭ 52 (-85.31%)
Mutual labels:  voice-recognition, speech-recognition, asr
Nativescript Speech Recognition
💬 Speech to text, using the awesome engines readily available on the device.
Stars: ✭ 72 (-79.66%)
Mutual labels:  voice-recognition, speech-recognition, speech-to-text

Leopard

Made in Vancouver, Canada by Picovoice


Leopard is an on-device speech-to-text engine. Leopard is:

  • Private: all voice processing runs locally.
  • Accurate
  • Compact and Computationally-Efficient
  • Cross-Platform:
    • Linux (x86_64), macOS (x86_64, arm64), Windows (x86_64)
    • Android and iOS
    • Chrome, Safari, Firefox, and Edge
    • Raspberry Pi (4, 3) and NVIDIA Jetson Nano

Table of Contents

  • AccessKey
  • Demos
  • SDKs
  • Releases

AccessKey

AccessKey is your authentication and authorization token for deploying Picovoice SDKs, including Leopard. Anyone using Picovoice needs a valid AccessKey, and you must keep it secret. Internet connectivity is required to validate your AccessKey with the Picovoice license servers, even though the voice recognition itself runs 100% offline.

AccessKey also verifies that your usage is within the limits of your account. Everyone who signs up for Picovoice Console receives the Free Tier usage rights described here. If you wish to increase your limits, you can purchase a subscription plan.
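
As a minimal illustration in Python (the environment variable name below is a hypothetical choice, not part of the SDK), the AccessKey is simply passed when creating the engine:

import os

import pvleopard

# Hypothetical environment variable; keeps the secret AccessKey out of source code.
access_key = os.environ["PICOVOICE_ACCESS_KEY"]

# Engine creation validates the AccessKey against the Picovoice license servers
# (internet required); transcription afterwards runs entirely on-device.
handle = pvleopard.create(access_key=access_key)
handle.delete()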

Demos

Python Demos

Install the demo package:

pip3 install pvleoparddemo

Run the following in the terminal:

leopard_demo_file --access_key ${ACCESS_KEY} --audio_paths ${AUDIO_PATH}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_PATH} with a path to an audio file you wish to transcribe.

C Demo

Build the demo:

cmake -S demo/c/ -B demo/c/build && cmake --build demo/c/build

Run the demo:

./demo/c/build/leopard_demo -a ${ACCESS_KEY} -l ${LIBRARY_PATH} -m ${MODEL_PATH} ${AUDIO_PATH}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console, ${LIBRARY_PATH} with the path to the appropriate library under lib, ${MODEL_PATH} with the path to the default model file (or your custom one), and ${AUDIO_PATH} with a path to an audio file you wish to transcribe.

iOS Demo

To run the demo, go to demo/ios/LeopardDemo and run:

pod install

Replace let accessKey = "${YOUR_ACCESS_KEY_HERE}" in the file ViewModel.swift with your AccessKey.

Then, using Xcode, open the generated LeopardDemo.xcworkspace and run the application.

Android Demo

Using Android Studio, open demo/android/LeopardDemo as an Android project and then run the application.

Replace "${YOUR_ACCESS_KEY_HERE}" in the file MainActivity.java with your AccessKey.

Node.js Demo

Install the demo package:

yarn global add @picovoice/leopard-node-demo

Run the following in the terminal:

leopard-file-demo --access_key ${ACCESS_KEY} --input_audio_file_path ${AUDIO_PATH}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_PATH} with a path to an audio file you wish to transcribe.

For more information about Node.js demos go to demo/nodejs.

Flutter Demo

To run the Leopard demo on Android or iOS with Flutter, you must have the Flutter SDK installed on your system. Once installed, you can run flutter doctor to determine any other missing requirements for your relevant platform. Once your environment has been set up, launch a simulator or connect an Android/iOS device.

Before launching the app, use the copy_assets.sh script to copy the Leopard demo model file into the demo project. (NOTE: on Windows, Git Bash or another bash shell is required, or you will have to manually copy the model file into the project.)

Replace "${YOUR_ACCESS_KEY_HERE}" in the file main.dart with your AccessKey.

Run the following command from demo/flutter to build and deploy the demo to your device:

flutter run

Go Demo

The demo requires cgo, which on Windows may mean that you need to install a gcc compiler like MinGW to build it properly.

From demo/go run the following command from the terminal to build and run the file demo:

go run filedemo/leopard_file_demo.go -access_key "${ACCESS_KEY}" -input_audio_path "${AUDIO_PATH}"

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_PATH} with a path to an audio file you wish to transcribe.

For more information about Go demos go to demo/go.

React Native Demo

To run the React Native Leopard demo app you will first need to set up your React Native environment. For this, please refer to React Native's documentation. Once your environment has been set up, navigate to demo/react-native to run the following commands:

For Android:

yarn android-install    # sets up environment
yarn android-run        # builds and deploys to Android

For iOS:

yarn ios-install        # sets up environment
yarn ios-run            # builds and deploys to iOS

Java Demo

The Leopard Java demo is a command-line application that lets you choose between running Leopard on an audio file or on microphone input.

Run the following commands from the terminal to build and run the file demo:

cd demo/java
./gradlew build
cd build/libs
java -jar leopard-file-demo.jar -a ${ACCESS_KEY} -i ${AUDIO_PATH}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_PATH} with a path to an audio file you wish to transcribe.

For more information about Java demos go to demo/java.

.NET Demo

Leopard .NET demo is a command-line application that lets you choose between running Leopard on an audio file or on real-time microphone input.

From demo/dotnet/LeopardDemo run the following in the terminal:

dotnet run -c FileDemo.Release -- --access_key ${ACCESS_KEY} --input_audio_path ${AUDIO_PATH}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_PATH} with a path to an audio file you wish to transcribe.

For more information about .NET demos, go to demo/dotnet.

Rust Demo

Leopard Rust demo is a command-line application that lets you choose between running Leopard on an audio file or on real-time microphone input.

From demo/rust/filedemo run the following in the terminal:

cargo run --release -- --access_key ${ACCESS_KEY} --input_audio_path ${AUDIO_PATH}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_PATH} with a path to an audio file you wish to transcribe.

For more information about Rust demos, go to demo/rust.

Web Demo

From demo/web run the following in the terminal:

yarn
yarn start

(or)

npm install
npm run start

Open http://localhost:5000 in your browser to try the demo.

SDKs

Python

Install the Python SDK:

pip3 install pvleopard

Create an instance of the engine and transcribe an audio file:

import pvleopard

handle = pvleopard.create(access_key='${ACCESS_KEY}')

print(handle.process_file('${AUDIO_PATH}'))

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_PATH} with the path to an audio file. Finally, when done, be sure to explicitly release the resources using handle.delete().
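
Since v1.1.0, word-level metadata accompanies the transcript (see Releases below). Here is a short sketch, assuming process_file returns a (transcript, words) pair in which each word carries the start_sec, end_sec, and confidence fields listed in the release notes:

import pvleopard

handle = pvleopard.create(access_key='${ACCESS_KEY}')
try:
    # Assumed (transcript, words) return shape, per the v1.1.0 release notes.
    transcript, words = handle.process_file('${AUDIO_PATH}')
    print(transcript)
    for word in words:
        print("%s [%.2f - %.2f] (%.2f)" % (word.word, word.start_sec, word.end_sec, word.confidence))
finally:
    # Release resources explicitly, as noted above.
    handle.delete()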

C

Create an instance of the engine and transcribe an audio file:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#include "pv_leopard.h"

pv_leopard_t *handle = NULL;
bool automatic_punctuation = false;
pv_status_t status = pv_leopard_init("${ACCESS_KEY}", "${MODEL_PATH}", automatic_punctuation, &handle);
if (status != PV_STATUS_SUCCESS) {
    // error handling logic
}

char *transcript = NULL;
int32_t num_words = 0;
pv_word_t *words = NULL;
status = pv_leopard_process_file(handle, "${AUDIO_PATH}", &transcript, &num_words, &words);
if (status != PV_STATUS_SUCCESS) {
    // error handling logic
}

fprintf(stdout, "%s\n", transcript);
for (int32_t i = 0; i < num_words; i++) {
    fprintf(
            stdout,
            "[%s]\t.start_sec = %.1f .end_sec = %.1f .confidence = %.2f\n",
            words[i].word,
            words[i].start_sec,
            words[i].end_sec,
            words[i].confidence);
}

free(transcript);
free(words);

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console, ${MODEL_PATH} with the path to the default model file (or your custom one), and ${AUDIO_PATH} with the path to an audio file. Finally, when done, be sure to release resources acquired using pv_leopard_delete(handle).

iOS

The Leopard iOS binding is available via CocoaPods. To import it into your iOS project, add the following line to your Podfile and run pod install:

pod 'Leopard-iOS'

Create an instance of the engine and transcribe an audio file:

import Leopard

let modelPath = Bundle(for: type(of: self)).path(
        forResource: "${MODEL_FILE}", // name of the Leopard model file
        ofType: "pv")!

let leopard = Leopard(accessKey: "${ACCESS_KEY}", modelPath: modelPath)

do {
    let audioPath = Bundle(for: type(of: self)).path(forResource: "${AUDIO_FILE_NAME}", ofType: "${AUDIO_FILE_EXTENSION}")
    let result = try leopard.process(audioPath)
    print(result.transcript)
} catch let error as LeopardError {
    // handle Leopard-specific error
} catch {
    // handle any other error
}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console, ${MODEL_FILE} with a custom-trained model from Picovoice Console or the default model, ${AUDIO_FILE_NAME} with the name of the audio file, and ${AUDIO_FILE_EXTENSION} with the extension of the audio file.

Android

To include the package in your Android project, ensure you have included mavenCentral() in your top-level build.gradle file and then add the following to your app's build.gradle:

dependencies {
    implementation 'ai.picovoice:leopard-android:${LATEST_VERSION}'
}

Create an instance of the engine and transcribe an audio file:

import ai.picovoice.leopard.*;

final String accessKey = "${ACCESS_KEY}"; // AccessKey obtained from Picovoice Console (https://console.picovoice.ai/)
final String modelPath = "${MODEL_FILE}";
try {
    Leopard handle = new Leopard.Builder()
        .setAccessKey(accessKey)
        .setModelPath(modelPath)
        .build(appContext);

    File audioFile = new File("${AUDIO_FILE_PATH}");
    LeopardTranscript transcript = handle.processFile(audioFile.getAbsolutePath());

} catch (LeopardException ex) {
    // handle error
}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console, ${MODEL_FILE} with a custom-trained model from Picovoice Console or the default model, and ${AUDIO_FILE_PATH} with the path to the audio file.

Node.js

Install the Node.js SDK:

yarn add @picovoice/leopard-node

Create an instance of the Leopard class and transcribe an audio file:

const Leopard = require("@picovoice/leopard-node");

const accessKey = "${ACCESS_KEY}"; // Obtained from the Picovoice Console (https://console.picovoice.ai/)
let handle = new Leopard(accessKey);

const result = handle.processFile('${AUDIO_PATH}');
console.log(result.transcript);

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_PATH} with the path to an audio file.

When done, be sure to release resources using release():

handle.release();

Flutter

Add the Leopard Flutter plugin to your pubspec.yaml.

dependencies:
  leopard_flutter: ^<version>

Create an instance of the engine and transcribe an audio file:

import 'package:leopard_flutter/leopard.dart';

const accessKey = "{ACCESS_KEY}"; // AccessKey obtained from Picovoice Console (https://console.picovoice.ai/)

try {
    Leopard _leopard = await Leopard.create(accessKey, '{LEOPARD_MODEL_PATH}');
    LeopardTranscript result = await _leopard.processFile("{AUDIO_FILE_PATH}");
    print(result.transcript);
} on LeopardException catch (err) {
    // handle error
}

Replace {ACCESS_KEY} with your AccessKey obtained from Picovoice Console, {LEOPARD_MODEL_PATH} with a custom-trained model from Picovoice Console or the path to the default model, and {AUDIO_FILE_PATH} with the path to the audio file.

Go

Install the Go binding:

go get github.com/Picovoice/leopard/binding/go

Create an instance of the engine and transcribe an audio file:

import (
    "log"

    . "github.com/Picovoice/leopard/binding/go"
)

leopard := Leopard{AccessKey: "${ACCESS_KEY}"}
err := leopard.Init()
if err != nil {
    // handle init error
}
defer leopard.Delete()

transcript, words, err := leopard.ProcessFile("${AUDIO_PATH}")
if err != nil {
    // handle process error
}

log.Println(transcript)
log.Println(words)

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_PATH} with the path to an audio file. Finally, when done, be sure to explicitly release the resources using leopard.Delete().

React Native

The Leopard React Native binding is available via NPM. Add it via the following command:

yarn add @picovoice/leopard-react-native

Create an instance of the engine and transcribe an audio file:

import {Leopard, LeopardErrors} from '@picovoice/leopard-react-native';

try {
  const leopard = await Leopard.create("${ACCESS_KEY}", "${MODEL_FILE}")
  const { transcript, words } = await leopard.processFile("${AUDIO_FILE_PATH}")
  console.log(transcript)
} catch (err: any) {
  if (err instanceof LeopardErrors) {
    // handle error
  }
}

Replace ${ACCESS_KEY} with your AccessKey obtained from Picovoice Console, ${MODEL_FILE} with a custom-trained model from Picovoice Console or the default model, and ${AUDIO_FILE_PATH} with the absolute path of the audio file. When done, be sure to explicitly release the resources using leopard.delete().

Java

The latest Java bindings are available from the Maven Central Repository at:

ai.picovoice:leopard-java:${version}

Create an instance of the engine with the Leopard Builder class and transcribe an audio file:

import ai.picovoice.leopard.*;

final String accessKey = "${ACCESS_KEY}";

try {
    Leopard leopard = new Leopard.Builder().setAccessKey(accessKey).build();
    LeopardTranscript result = leopard.processFile("${AUDIO_PATH}");
    System.out.println(result.getTranscriptString());
    leopard.delete();
} catch (LeopardException ex) {
    // handle error
}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_PATH} with the path to an audio file. Finally, when done, be sure to explicitly release the resources using leopard.delete().

.NET

Install the .NET SDK using NuGet or the dotnet CLI:

dotnet add package Leopard

Create an instance of the engine and transcribe an audio file:

using Pv;

const string accessKey = "${ACCESS_KEY}";
const string audioPath = "/absolute/path/to/audio_file";

Leopard handle = Leopard.Create(accessKey);

Console.Write(handle.ProcessFile(audioPath));

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and audioPath with the absolute path to an audio file. Finally, when done, release the resources using handle.Dispose().

Rust

First you will need Rust and Cargo installed on your system.

To add the Leopard library to your app, add pv_leopard to your app's Cargo.toml manifest:

[dependencies]
pv_leopard = "*"

Create an instance of the engine using the LeopardBuilder and transcribe an audio file:

use leopard::{Leopard, LeopardBuilder};

fn main() {
    let access_key = "${ACCESS_KEY}"; // AccessKey obtained from Picovoice Console (https://console.picovoice.ai/)
    let leopard: Leopard = LeopardBuilder::new()
        .access_key(access_key)
        .init()
        .expect("Unable to create Leopard");

    if let Ok(leopard_transcript) = leopard.process_file("/absolute/path/to/audio_file") {
        println!("{}", leopard_transcript.transcript);
    }
}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console.

Web

Install the web SDK using yarn:

yarn add @picovoice/leopard-web

or using npm:

npm install --save @picovoice/leopard-web

Create an instance of the engine using LeopardWorker and transcribe an audio file:

import { LeopardWorker } from "@picovoice/leopard-web";
import leopardParams from "${PATH_TO_BASE64_LEOPARD_PARAMS}";

function getAudioData(): Int16Array {
  // ... function to get audio data
  return new Int16Array();
}

const leopard = await LeopardWorker.create(
  "${ACCESS_KEY}",
  { base64: leopardParams },
);

const { transcript, words } = await leopard.process(getAudioData());
console.log(transcript);
console.log(words);

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console. Finally, when done release the resources using leopard.release().

Releases

v1.1.0 — August 11th, 2022

  • Added true-casing by default for transcription results
  • Added an option to enable automatic punctuation insertion (see the sketch after this list)
  • Word timestamps and confidence are returned as part of the transcription
  • Support for 3gp (AMR) and MP4/M4A (AAC) audio files
  • Leopard Web SDK release
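
A minimal Python sketch of the punctuation option above; the enable_automatic_punctuation keyword is an assumption here, mirroring the automatic_punctuation argument shown in the C snippet earlier:

import pvleopard

# enable_automatic_punctuation is assumed to mirror the C API's
# automatic_punctuation flag shown above; when enabled, the returned
# transcript includes punctuation marks.
handle = pvleopard.create(
    access_key='${ACCESS_KEY}',
    enable_automatic_punctuation=True)
print(handle.process_file('${AUDIO_PATH}'))
handle.delete()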

v1.0.0 — January 10th, 2022

  • Initial release.