
Azure-Samples / Cognitive-Speech-STT-Android

Android SDK for the Microsoft Speech-to-Text API, part of Cognitive Services

September 2018: New Microsoft Cognitive Services Speech SDK available

We released a new Speech SDK supporting the new Speech Service. The new Speech SDK comes with support for Windows, Android, Linux, JavaScript, and iOS.

Please check out Microsoft Cognitive Services Speech SDK for documentation, links to the download pages, and the samples.

NOTE: The content of this repository supports the Bing Speech Service, not the new Speech Service. The Bing Speech Service has been deprecated; please use the new Speech Service instead.

Microsoft Speech API: Android Speech-to-Text Client Library and Samples

This repo contains the Android client library and samples for Speech-to-Text in the Microsoft Speech API, an offering within Microsoft Cognitive Services on Azure, formerly known as Project Oxford.

The Client Library

The Speech-to-Text client library is an Android client library for the Microsoft Speech-to-Text API.

The easiest way to consume the client library is to add the com.microsoft.projectoxford:speechrecognition package from the Maven Central Repository. To find the latest version of the client library, go to http://search.maven.org and search for "g:com.microsoft.projectoxford".

To add the client library dependency in your build.gradle file, add the following line to the dependencies block.

dependencies {
    //
    // Use the following line to include client library from Maven Central Repository
    // Change the version number from the search.maven.org result
    //
    compile 'com.microsoft.projectoxford:speechrecognition:1.2.2'

    // Your other Dependencies...
}

To add the client library dependency from Android Studio:

  1. From the menu, choose File > Project Structure.
  2. Click on your app module.
  3. Click on the Dependencies tab.
  4. Click the "+" sign to add a new dependency.
  5. Pick Library dependency from the drop-down list.
  6. In the Choose Library Dependency dialog, type com.microsoft.projectoxford and click the search icon.
  7. Pick the client library that you intend to use.
  8. Click OK to add the new dependency.
  9. Download the appropriate JNI library libandroid_platform.so from this page and put it into your project's app/src/main/jniLibs/armeabi/ or app/src/main/jniLibs/x86/ directory.

The Sample

This sample demonstrates the following features using a WAV file or external microphone input (a minimal client sketch follows the list):

  • Short-form recognition
  • Long-form dictation
  • Recognition with intent
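
The sketch below illustrates how an app typically drives the client library for microphone recognition. The package, class, and method names follow the Project Oxford speech client used by this sample, but they may differ between library versions, so treat this as an illustration and check the sample source for the exact signatures.

// Sketch only: verify package and method names against the sample source.
// In older releases the classes may live under com.microsoft.projectoxford.speechrecognition.
import android.app.Activity;

import com.microsoft.cognitiveservices.speechrecognition.ISpeechRecognitionServerEvents;
import com.microsoft.cognitiveservices.speechrecognition.MicrophoneRecognitionClient;
import com.microsoft.cognitiveservices.speechrecognition.RecognitionResult;
import com.microsoft.cognitiveservices.speechrecognition.SpeechRecognitionMode;
import com.microsoft.cognitiveservices.speechrecognition.SpeechRecognitionServiceFactory;

public class ShortPhraseExample implements ISpeechRecognitionServerEvents {

    private MicrophoneRecognitionClient micClient;

    public void startRecognition(Activity activity, String subscriptionKey) {
        // Short-form recognition from the microphone; use
        // SpeechRecognitionMode.LongDictation for long-form dictation instead.
        micClient = SpeechRecognitionServiceFactory.createMicrophoneClient(
                activity,
                SpeechRecognitionMode.ShortPhrase,
                "en-US",           // recognition language
                this,              // event callbacks
                subscriptionKey);  // Speech API subscription key
        micClient.startMicAndRecognition();
    }

    @Override
    public void onPartialResponseReceived(String partialText) {
        // Interim hypotheses arrive here while the user is still speaking.
    }

    @Override
    public void onFinalResponseReceived(RecognitionResult result) {
        // Final n-best results for the utterance; stop capturing audio.
        micClient.endMicAndRecognition();
    }

    @Override
    public void onIntentReceived(String luisResultJson) {
        // Only used by the "with intent" clients; the LUIS result arrives as JSON.
    }

    @Override
    public void onError(int errorCode, String message) {
        // Inspect errorCode/message, e.g. for an invalid subscription key.
    }

    @Override
    public void onAudioEvent(boolean recording) {
        // true when the microphone starts recording, false when it stops.
    }
}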

Requirements

  • Android OS must be Android 4.1 or higher (API Level 16 or higher)
  • The speech client library contains native code. To use this sample in an emulator, make sure that your build variant matches the architecture (x86 or arm) of your emulator.

Build the sample

  1. First, you must obtain a Speech API subscription key by following the instructions on Subscriptions.

  2. Start Android Studio, choose Import project (Eclipse ADT, Gradle, etc.) from the Quick Start options, and select the Cognitive-Speech-STT-Android folder.

  3. When a Gradle Sync dialog pops up, choose OK to continue downloading the latest tools.

  4. In Android Studio -> Project panel -> Android view, open the file "SpeechRecoExample/res/values/strings.xml" and find the line containing "Please_add_the_subscription_key_here". Replace that value with your subscription key from the first step.

  5. If you want to use Recognition with intent, you also need to sign up for Language Understanding Intelligent Service (LUIS) and set the key values in luisAppID and luisSubscriptionID in SpeechRecoExample/res/values/strings.xml (a sketch of how these values are used follows these steps).

  6. In Android Studio, select Build > Make Project to build the sample, then Run to launch the sample app.
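
As an illustration of step 5, the sketch below shows how the LUIS values from strings.xml can be wired into the intent-enabled client. The resource name subscription_key is an assumption for illustration; luisAppID and luisSubscriptionID mirror the names used above. The factory method and class names follow the Project Oxford client and may differ by library version.

// Sketch only: illustrative wiring of the strings.xml values into the intent-enabled client.
// R refers to the app's generated resources; "subscription_key" is an assumed resource name.
import android.app.Activity;

import com.microsoft.cognitiveservices.speechrecognition.ISpeechRecognitionServerEvents;
import com.microsoft.cognitiveservices.speechrecognition.MicrophoneRecognitionClientWithIntent;
import com.microsoft.cognitiveservices.speechrecognition.SpeechRecognitionServiceFactory;

public final class IntentClientFactory {

    public static MicrophoneRecognitionClientWithIntent create(
            Activity activity, ISpeechRecognitionServerEvents listener) {
        String speechKey = activity.getString(R.string.subscription_key);
        String luisAppId = activity.getString(R.string.luisAppID);
        String luisSubId = activity.getString(R.string.luisSubscriptionID);

        // LUIS output is delivered to the listener's onIntentReceived(...) callback.
        return SpeechRecognitionServiceFactory.createMicrophoneClientWithIntent(
                activity, "en-US", listener, speechKey, luisAppId, luisSubId);
    }

    private IntentClientFactory() {
    }
}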

Running the sample

In Android Studio, select Run > Run app to launch the sample app.

  1. In the application, press the Select Mode button to select the type of speech recognition you would like to use.

  2. To start recognition, press the Start button.

Contributing

We welcome contributions. Feel free to file issues and submit pull requests on the repo, and we'll try to address them as soon as possible. Learn more about how you can help in our Contribution Rules & Guidelines.

You can reach out to us anytime with questions and suggestions using our communities.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

All Microsoft Cognitive Services SDKs and samples are licensed with the MIT License. For more information, see LICENSE.

Sample images are licensed separately; please refer to LICENSE-IMAGE.

Developer Code of Conduct

Developers using Cognitive Services, including this client library and sample, are expected to follow the "Developer Code of Conduct for Microsoft Cognitive Services", found at http://go.microsoft.com/fwlink/?LinkId=698895.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].