- Mar 7: Ohio North Database Training
- Mar 8: Azure Cleveland
- Mar 10: Global AI Bootcamp NYC
- Mar 16: Great Lakes User Group for .Net (GLUG.Net)
- Mar 16: Chicago .Net
- Mar 23: Cleveland C#
- May 11-13: Global Azure 2023
Use Azure Advisor to identify idle virtual machines (VMs), ExpressRoute circuits, and other underutilized resources. It also provides recommendations for reconfiguring resources to reduce cost.
Use reservation pricing to pre-pay for resources for a one- or three-year term. This can yield a discount of up to 72% over pay-as-you-go pricing on Azure services.
Have existing on-premises software licenses? Use them with Azure at no additional cost, along with select Azure services for free.
Avoid the high cost of hardware used only during peak times of the year. Instead, dynamically allocate and deallocate resources to match your business needs.
Use Microsoft Cost Management to create and manage budgets for the Azure services you use, and to monitor your organization’s cloud spending.
Azure offers a variety of resources with varying degrees of management. Depending on the level of resource management you choose, monthly spend can be reduced significantly. Use the diagram referenced below to identify the various resources and their management needs.
Diagram Reference: https://medium.com/chenjd-xyz/azure-fundamental-iaas-paas-saas-973e0c406de7
Speech Services is one of the main categories of Azure Cognitive Services. Using this service, developers can use one of four APIs to perform the following:
In a previous post, I wrote a tutorial on converting Speech-to-Text. In this post, I will go in the opposite direction and provide step-by-step directions for converting Text-to-Speech.
How to Use Text-to-Speech
The Program class contains two methods: Main() and CheckResult().
Breaking Down Main()
Looking at Main(), the first task is to obtain the subscription key and region. These two values are obtained from Step 3 above and are tied to the Azure subscription. They can be used by anyone who obtains them; for this reason, they are stored in a .config file and not made available to the reader.
The next task is to instantiate the SpeechConfig class using the subscription key and region. The purpose of this class is to hold all configuration for the Speech service.
In addition to the subscription key and region, line 14 specifies the voice to be used for speech synthesis. Azure Speech Service offers a variety of voices and supported languages, which are listed in the Azure documentation.
After the SpeechConfig instance is configured, it’s passed to the constructor of the SpeechSynthesizer class. As the name suggests, this class contains methods for synthesizing, or converting, text to speech, using all the configuration settings specified in the prior steps.
In lines 17-26, an infinite while loop repeatedly asks the user for text input and asynchronously calls SpeakTextAsync() on the SpeechSynthesizer instance.
speechSynthesizer.SpeakTextAsync() is an async method that takes the text string as an input parameter, sends it to Azure Speech Services to be synthesized as spoken language in the desired voice, and then plays the synthesized speech on the default speaker.
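Putting the steps above together, a minimal sketch of Main() might look like the following (assuming the Microsoft.CognitiveServices.Speech NuGet package; the key, region, and voice name shown are placeholders, not the values from the original project):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class Program
{
    static async Task Main()
    {
        // Placeholder values; the original project reads these from a .config file.
        var speechConfig = SpeechConfig.FromSubscription("<subscription-key>", "<region>");
        speechConfig.SpeechSynthesisVoiceName = "en-US-JennyNeural"; // example voice

        using var speechSynthesizer = new SpeechSynthesizer(speechConfig);

        while (true)
        {
            Console.Write("Enter some text to be spoken: ");
            string text = Console.ReadLine();

            // Send the text to Azure Speech Services and play the result on the default speaker.
            var speechSynthesisResult = await speechSynthesizer.SpeakTextAsync(text);
            CheckResult(speechSynthesisResult, text); // error checking, described below
        }
    }
}
```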
Breaking Down CheckResult()
This method error-checks the synthesis result and handles it accordingly. CheckResult() examines the Reason property of the speechSynthesisResult instance. If synthesis completed successfully, it simply echoes the text entered. Otherwise, if an error occurred, it displays the messages stored in the ErrorCode and ErrorDetails properties of the SpeechSynthesisCancellationDetails class.
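A sketch of CheckResult() along those lines (again assuming the Microsoft.CognitiveServices.Speech SDK; the exact console messages are illustrative):

```csharp
static void CheckResult(SpeechSynthesisResult speechSynthesisResult, string text)
{
    switch (speechSynthesisResult.Reason)
    {
        case ResultReason.SynthesizingAudioCompleted:
            // Success: echo the text that was synthesized.
            Console.WriteLine($"Speech synthesized for text: [{text}]");
            break;
        case ResultReason.Canceled:
            // Failure: surface the error code and details.
            var cancellation = SpeechSynthesisCancellationDetails.FromResult(speechSynthesisResult);
            Console.WriteLine($"CANCELED: Reason={cancellation.Reason}");
            if (cancellation.Reason == CancellationReason.Error)
            {
                Console.WriteLine($"ErrorCode={cancellation.ErrorCode}");
                Console.WriteLine($"ErrorDetails={cancellation.ErrorDetails}");
            }
            break;
    }
}
```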
A complete demo of Text-to-Speech service can be found in this segment.
Why use Text-to-Speech
The first reaction most developers have when they hear the results is “How cool is this?” It’s certainly cool, but the benefits of this feature extend beyond novelty. Applications can now verbally communicate results to visually impaired users, a segment of the user population that is often overlooked. Another benefit of verbal communication is that all users can hear results while doing other tasks, instead of having to focus on a screen to read them. Text-to-Speech is one of four services in the Speech category of Cognitive Services.
A video presentation and demo on “Overview of Speech Services” discusses all the services in more detail. The corresponding code used for the video presentation and this article can be found at https://github.com/SamNasr/AzureSpeechServices.
This post was featured as part of the C# Advent 2022, an annual blogging event hosted by Matthew D. Groves.
During the last meeting of the .NET Study Group on Azure Virtual Networks, a couple of questions came up that needed further explanation. I thought it would be best to post and share them here.
Question: Do I need an NSG or subnet if a VM is in a VNet?
Answer: Yes, it’s a best practice. By default, services outside the VNet cannot connect to services within the VNet.
However, you can configure the network to allow access from external services. For example, assume you have a VNet that contains both web servers and database servers. You can configure the VNet for public access so that outside users can reach the web servers. You would also need a subnet or NSG to prevent that same public traffic from accessing the database servers in the same VNet.
Question: Can you provide a sample diagram of the Azure Infrastructure and how VNets would be implemented?
Answer: See below for the Sample Azure Infrastructure Diagram: (https://www.dotnettricks.com/learn/azure/what-is-microsoft-azure-virtual-network-and-architecture)
Question: Where can I find a list of frequently asked questions on Azure VNets?
Answer: For additional reading on Azure VNet, see the FAQ page at https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-faq
Virtual User Group Meetings for Mar/Apr ‘22
Mar 24: “Predicting Flights with Azure Databricks” - https://www.meetup.com/NWVDNUG/events/284074689/
Mar 29: “Machine Learning in .Net” - https://www.meetup.com/Edmonton-NET-User-Group/events/284389088/
Apr 4: Ohio North Database Training - https://www.meetup.com/ohio-north-database-training/events/
Apr 13: Azure Cleveland - https://www.meetup.com/Azure-Cleveland-Meetup/
Apr 21: GLUG.Net - https://www.meetup.com/GLUGnet/events/
Apr 28: Cleveland C#/VB.Net User Group - https://www.meetup.com/Cleveland-C-VB-Net-User-Group
Microsoft Azure offers a great number of features and capabilities. One of them is Cognitive Services, where users can access a variety of APIs that help mimic a human response. Some features include converting text to spoken speech, speech to text, and even the equivalent human understanding of a spoken phrase. These services are divided into four major categories, as seen below. Please note that “Computer Vision” and “Custom Vision” sound very similar, but their capabilities are different, as outlined in the “Vision” section below.
Note: Model can be exported for use: https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/export-your-model
Speech-to-text is one of the many cognitive services offered by Azure. All Azure cognitive services can be classified into one of four primary categories:
Each category contains a variety of services, with Speech-to-text falling under the “Speech” category. It converts spoken language to text using either an SDK or a web API call. Both methods require a subscription key, obtained through a brief resource setup in the Azure portal. Speech-to-text can be configured to recognize a variety of languages, as displayed at https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support.
In addition, the source of the speech can be either live spoken words or a recording.
How to Use Speech-to-text
Project Setup (using SDK)
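Once the Speech resource is set up in the Azure portal, the basic recognition flow can be sketched as follows (a minimal example assuming the Microsoft.CognitiveServices.Speech NuGet package; the key and region shown are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class Program
{
    static async Task Main()
    {
        // Placeholder values obtained from the Azure portal resource setup.
        var speechConfig = SpeechConfig.FromSubscription("<subscription-key>", "<region>");
        speechConfig.SpeechRecognitionLanguage = "en-US";

        // With no AudioConfig specified, the default microphone is used;
        // AudioConfig.FromWavFileInput() could be used for a recording instead.
        using var recognizer = new SpeechRecognizer(speechConfig);

        Console.WriteLine("Speak into your microphone...");
        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine($"Recognized: {result.Text}");
    }
}
```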
Why You Should Use Speech-to-text
Accessibility! Most applications assume their users are not visually impaired, which excludes a significant number of users. Certainly, screen readers have been available in the Windows OS for nearly two decades, allowing any user to understand what is displayed on the screen. However, a visually impaired user will still have difficulty interacting with the user interface (i.e., submitting info, filling out forms, etc.). Thanks to Speech-to-text, users can now speak to the application and have their words dynamically converted to text in the application. This makes the application accessible to more users, as well as ADA and WCAG compliant.
This post was featured as part of the C# Advent 2021, an annual blogging event hosted by Matthew D. Groves.