Ensure compatibility with multiple frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. Minimize dependencies to avoid version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

For local files, similar code can be used to achieve transcription:

await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially valuable for applications that require immediate processing of audio data.

using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}")
);
transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);

await transcriber.ConnectAsync();

// Pseudocode for acquiring audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();

Using LeMUR for LLM Apps

The SDK integrates with LeMUR to enable developers to build large language model (LLM) apps on voice data. Here is an example:

var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);

Audio Intelligence Models

In addition, the SDK includes built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}

For more details, visit the official AssemblyAI blog.

Image source: Shutterstock.