
Using Azure OpenAI and Logic Apps to Triage Bugs in Azure DevOps

There are other examples out there using the publicly-available OpenAI service, but what if you’re concerned about “pulling a Samsung” and inadvertently sending your company’s IP to the public service when trying to triage a bug report? A simple error message can accidentally provide context about what you’re working on, such as project names, file types, method names, and any other context you intend to use in your prompt.

OpenAI is an AI research lab and company known for its advanced language models such as GPT-3, while the Azure OpenAI Service is a specific offering within Microsoft Azure that integrates OpenAI’s technologies into the Azure platform, allowing developers to leverage those powerful AI models at scale. Azure OpenAI (AOAI) also helps keep your company’s data in your own Azure tenant.

In my little sample implementation, I used AOAI, a Logic App, and a small Azure Function to put together an “automatic AI bug triage” service hooked up to Azure DevOps. This approach provides a simple way to send problem information to OpenAI for the sake of solving a bug, without training the public OpenAI model.

First, get yourself access to the Azure OpenAI service by requesting it here.

Once you have your service provisioned, you’ll be ready to connect to it from your Azure Function (more on that shortly).

Azure Logic App

The scaffolding for this triage service is the Azure Logic App, which orchestrates capturing the bug submission, calling the Function, and putting the results back into the bug in Azure DevOps.

Below, you see the first steps of the Logic App. It’s triggered when a bug is created in Azure DevOps (you must specify your org and project for scope, and can provide other trigger conditions if you like), then captures the value from the “Repro Steps” field. That value is then “scrubbed” to plain text.

First steps of the Azure Logic App

Next, we call the TriageBug Azure Function and pass it the value of the ‘reprosteps’ variable. After capturing the Function’s response in a ‘triagetext’ variable, that value is updated into the original bug (work item). In this example, I put it in a custom “AI Triage” field.

Final steps of Azure Logic App

Azure Function to Call Azure Open AI

Now, I could have skipped the Function and called the AOAI service directly via its REST endpoint, but why not modularize that a little? A Function provides more development control over prompting (or prompt chaining), hopefully yielding a more accurate response from the service.

In my Function I used Microsoft Semantic Kernel, a lightweight SDK that provides easy access to AOAI. An easy method to get an IKernel object:

  private static IKernel GetSemanticKernel()
  {
      // Build a kernel, then register the Azure OpenAI text completion service
      // using your AOAI deployment name, endpoint, and key.
      var kernel = Kernel.Builder.Build();
      kernel.Config.AddAzureTextCompletionService(
          _OpenAIDeploymentName,
          _OpenAIEndpoint,
          _OpenAIKey
      );
      return kernel;
  }

You can, for example purposes, hard-code the above ‘_OpenAIDeploymentName’, ‘_OpenAIEndpoint’, and ‘_OpenAIKey’ values, but it’s recommended you pull them from Key Vault.
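
For example, here’s a minimal sketch that resolves those values from app settings at startup. The setting names are hypothetical, and in an Azure Functions app each setting can be a Key Vault reference, so no secret ever lives in configuration or code:

    // Hypothetical app setting names; each value can be a Key Vault reference, e.g.
    // @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/aoai-key/)
    private static readonly string _OpenAIDeploymentName =
        Environment.GetEnvironmentVariable("AOAI_DEPLOYMENT_NAME");
    private static readonly string _OpenAIEndpoint =
        Environment.GetEnvironmentVariable("AOAI_ENDPOINT");
    private static readonly string _OpenAIKey =
        Environment.GetEnvironmentVariable("AOAI_KEY");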

I created a quick static method to call AOAI. I hard-coded the prompt sent to the service, but you could easily extend this to pull a prompt from somewhere else:

        static async Task<string> AskForHelp(string msg)
        {
            IKernel kernel = GetSemanticKernel();

            var prompt = @"As a software developer, identify potential causes, fixes, and recommendations 
for the following error message (include references as well): {{$input}}";

            var fnc = kernel.CreateSemanticFunction(prompt);

            var result = await fnc.InvokeAsync(msg);
            return result.ToString();
        }

The Function trigger was only a small edit from the base code created when using an HTTP trigger:

public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
    ILogger log)
{
    log.LogInformation("TriageBug function processed a request.");

    // Accept the message from the query string, or fall back to the JSON request body.
    string message = req.Query["message"];

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    message ??= data?.message;

    // Send the scrubbed repro steps to AOAI and return the triage text.
    string responseMessage = await AskForHelp(message);

    return new OkObjectResult(responseMessage.Trim());
}
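
For reference, the Logic App’s HTTP action POSTs a JSON body whose message property carries the scrubbed repro steps; a hypothetical example:

    {
      "message": "System.NullReferenceException: Object reference not set to an instance of an object. at OrderService.Submit(Order order)"
    }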

If I have the Logic App enabled and file a bug in ADO, the Logic App runs and I see something like this:

AI triage of bug in ADO

So there you go!

Is it perfect? Nope. There are plenty of things you can adjust/add here to make this more robust, such as:

  • Use more AI to “learn” what best prompt to use (based on extracted text) to get more accurate results.
  • Return the triage text as HTML to enable hyperlinks and better formatting.
  • Uh.. maybe a little error handling? 😀
  • Use more values from ADO (additional bug field values, project name, description, etc) to provide better context for the prompt.

I’m sure there are more. Hopefully this gets you started!

Webinar: Optimize Costs with Application Modernization

Today more than ever, healthcare companies are looking to lower costs and speed up innovation through the lens of their applications. Microsoft has created a team to help support the complex and time-consuming process of analyzing ALL the components of an application AND the hardware it runs on, providing a comprehensive recommendation and plan for anything from a single application to an entire company’s portfolio of applications.

Join us in this webinar to learn about:

  • State of Healthcare application landscape
  • An application-focused assessment that helps deliver a financial and technical business case for each application
  • Programs and resources available to help accelerate modernization
  • Real-world success stories leveraging the Application Modernization Assessment

  • How to engage directly with a Microsoft Application Innovation Specialist

WHEN: Wed, 12/14, 2022, 9:30AM PST

Register here

Stretching your Dev/Test Dollar in Azure

Visual Studio Subscriptions, Dev/Test Subscriptions & More

You’ve got needs. Dev/Test needs. And access to an infinite world of power with Azure. How can you leverage that cloudy frontier for non-production work, i.e. Dev/Test?

There are typically two main scenarios when talking about Dev/Test in the cloud:

  • You want a sandbox to play in that won’t cost much or impact others. A place that you can use to try new things, spin up new services, and learn.
  • You need a shared sandbox for collaborative work (but not production), for trying new concepts or architectures, or simply don’t want to pay production prices for non-production environments.

In the basic sense, there are two options to best leverage your dollars in Azure for Dev/Test: Visual Studio Subscriptions (if you own them) and their Azure credit benefit, and the Enterprise Dev/Test Subscriptions (in Azure, not to be confused with a VS Subscription). Let’s take a walk through both approaches.

Visual Studio Subscriptions – Azure Credits Benefit

Your Visual Studio Subscription comes with monthly credits in Azure to use against your own, individual Azure subscription. Depending on your flavor of Visual Studio, you get a different amount of credits ($50/$100/$150), which resets monthly. (This means that if you go over your credit limit, your resources simply shut off until your billing period restarts.)

Monthly Azure credits as part of your Visual Studio Subscription benefit

If you haven’t already done so, you can activate your monthly credit benefit at https://my.visualstudio.com/. (see below)


This monthly benefit gives you access to most Azure services for learning and speculative individual development, without bothering anyone. It’s well suited for:

  • Individual skills development
  • Experimenting in Azure
  • Light individual dev/test workloads
  • Prototyping, POC’s, “throw-away work”

That said, some organizations I’ve worked with in the past don’t like/allow these Visual Studio-based Azure subscriptions, as IT doesn’t initially have as much control over them. So, it’s best to check with your IT team and your Visual Studio Subscriptions administrator (how to find/contact).

Enterprise Dev/Test Subscription (Azure)

If you have an Enterprise Agreement with Microsoft, the Enterprise Dev/Test Subscription may be a great option for you if you have Visual Studio Subscribers that want/need to collaborate in Azure in a non-prod environment.

At a high-level, the Enterprise Dev/Test Offer is:

  • For non-production workloads in Azure
  • Special, lower rates on Windows VMs, cloud services, SQL, HDInsight, App Service, Logic Apps. (Additional savings with Reservations)
  • Access to Dev/Test images (incl. Win 8.1/10)
  • Centralized management via Azure Enterprise Portal
  • Use funds on your Enterprise Agreement/MACC (or, for pay-as-you-go (PAYGO), use the Dev/Test Offer)

That said, it’s important to make a distinction between the Enterprise Dev/Test Azure subscription and a standard Enterprise Azure subscription:

Enterprise Dev/Test:
  • Restricted to dev/test usage only
  • Can only be used by active VS subscribers
  • Includes dev/test images (Win 8.1 & 10)
  • Not covered by SLAs

Standard Enterprise:
  • Any/all workloads
  • Covered by SLAs

Main differences between Enterprise Dev/Test & standard Enterprise

Critical points for a Dev/Test subscription:

  • There is no financially-backed SLA
  • Any and all users of a Dev/Test subscription MUST have Visual Studio Subscriptions.

Sound like a good option? Your Azure Account Owner (w/ permission to do so from an Enterprise Administrator) can create a Dev/Test subscription for you.

Where does each fit in?

Let’s build this out visually.

Individual developer, learning, prototyping? Use the Azure credits as part of your Visual Studio Subscription.

Need to collaborate with your broader team in a non-prod environment? Go the Enterprise Dev/Test route.

Production workloads? Use your standard Enterprise Subscription (don’t forget your Azure Hybrid Benefit (AHUB) to minimize costs!).


Managing PII & PHI in Azure DevOps

Summary

As teams expand and deepen their usage of Azure DevOps, there is the potential for Personally Identifiable Information (PII) to be introduced into work items (user stories, bugs, test cases, etc.). This can create liability and privacy issues for organizations. PII can creep into Azure DevOps environments in forms including (though not limited to):

  • Field values in user stories, bugs, tasks, etc. (description, acceptance criteria, title, and other HTML or text fields)
  • Test cases (title, description, test step descriptions, test step attachments)
  • Attachments to any work item type

While there is not an off-the-shelf solution to help with this, there are ways to leverage Azure to develop a utility to find, manage, and notify appropriately when PII exists in the Azure DevOps organization.

Approaches to Scanning

There are two main approaches to this problem. First, the team needs to find any PII that already exists in Azure DevOps and take necessary action. Second, the team needs a method to detect PII as close to real time as possible, when it is first introduced.

Full Scan of Azure DevOps

To “catch up” and deal with PII that already exists in Azure DevOps, a comprehensive scan of Azure DevOps content is needed.

Below is a very high-level pseudo-code outline of performing such a scan. This scan takes into consideration all the aforementioned areas where PII could be present in Azure Boards or Azure Test Plans (the components of Azure DevOps that leverage work items).

I also built a sample (just a sample, only a sample) here in GitHub.

Connect to Azure DevOps organization
Foreach (project in organization)
{
    Foreach (workItemType in project)
    {
        Get all work items for the current workItemType
        Foreach (workItem in workItemsByType)
        {
            Get all HTML and text field values that could contain PII
            Send HTML/text field values to Azure Text Analytics (in batch document)
            Foreach (valueWithPII in TextAnalyticsResults)
            {
                Take some action (notification, redaction, removal)
            }
            Get attachments for the workItem
            Foreach (attachment in workItemAttachments)
            {
                Send attachment content (supported format) to Azure Computer Vision
                Send computerVisionResults to Azure Text Analytics
                Foreach (attachmentWithPII in AttachmentAnalyticsResults)
                {
                    Take some action (notification, removal)
                }
            }
            If (workItemType is a Test Case)
            {
                Get all values of each test step
                Send test step values to Azure Text Analytics (in batch document)
                Foreach (testStepWithPII in TestStepAnalyticsResults)
                {
                    Take some action (notification, redaction, removal)
                }
                Foreach (attachment in TestSteps)
                {
                    Send attachment content (supported format) to Azure Computer Vision
                    Send computerVisionResults to Azure Text Analytics
                    Foreach (attachmentWithPII in AttachmentAnalyticsResults)
                    {
                        Take some action (notification, removal)
                    }
                }
                Get any test case parameters
                Send test case parameters to Azure Text Analytics (in batch document)
                Foreach (paramWithPII in TestParametersAnalyticsResults)
                {
                    Take some action (notification, removal)
                }
            }
        }
    }
}

This solution could also be used for a periodic scan if real-time/triggered scans are prohibitive.


Incremental or Triggered Scan

Moving forward, teams will need to detect the introduction of PII into Azure DevOps as soon as possible. There are a couple of approaches to this more incremental or trigger-based scan.

First, the solution developed in “Full Scan of Azure DevOps” could be utilized here as well, parameterized to check only the most recent items for a given interval. For example, if the scan is to run every hour, filter work item querying to return only items with a ChangedDate in the last 60 minutes.
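
Here’s a minimal sketch of building that filter, assuming a 60-minute interval. Note that WIQL compares dates at day precision unless the WIQL REST endpoint is called with timePrecision=true:

    // Hypothetical: select only work items changed since the last scan run.
    // POST the query to {org}/{project}/_apis/wit/wiql?timePrecision=true&api-version=7.0
    // so the ChangedDate comparison honors hours and minutes, not just the date.
    string since = DateTime.UtcNow.AddMinutes(-60).ToString("yyyy-MM-ddTHH:mm:ssZ");
    string wiql = $@"SELECT [System.Id]
                     FROM WorkItems
                     WHERE [System.ChangedDate] >= '{since}'
                     ORDER BY [System.ChangedDate] DESC";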

Second, Azure Logic Apps could be used to trigger when work items are updated in Azure DevOps, providing detection within 1 minute of PII introduction. The Logic App would orchestrate the extraction of content to check, as well as any mitigation actions.

Below are a couple screenshots of basic examples of using a Logic App (steps are simplified for brevity).

Detect and Notify: This first example simply checks the description field of a work item for PII, using two Azure Functions (SanitizeString to remove HTML markup, and CheckPHIWorkItem to receive the text to check, call Azure Text Analytics, and format a response). It is triggered by an update to a work item. When complete, it sends an email to the person who updated the work item.

Detect, Mitigate, and Notify: This second example avoids using Azure Functions for Text Analytics by using the built-in connector for Text Analytics. It is triggered by an update to a work item. If the work item was updated by a human user, the Text Analytics connector is called. If PII is detected, the PII is copied to a file in Azure Blob Storage (where more RBAC controls are present), and either the redacted text or a violation notice is updated back into the work item. When complete, it sends an email to the person who updated the work item.

While there are Logic App connectors for Azure DevOps, Text Analytics, and Computer Vision, Azure Functions would provide more granular control (and also move toward more of a microservices architecture). Create Azure Functions to (a sketch of the first capability follows this list):

  • “Sanitize” HTML field values to plain text
  • Manage collation and interaction with Azure Text Analytics for text values
  • Manage OCR actions using Azure Computer Vision to extract text values from images and other attachments
  • Conduct PII replacement, redaction, or removal
  • Facilitate logging (to Azure storage, databases, or Azure Event Hubs)
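
As a sketch of that first “sanitize” capability, here’s a hypothetical helper (regex-based for brevity; a production version might use a proper HTML parser such as HtmlAgilityPack instead):

    using System.Net;
    using System.Text.RegularExpressions;

    // Hypothetical helper: strip HTML markup from a work item field value so only
    // plain text is sent on to Text Analytics.
    public static class SanitizeString
    {
        public static string ToPlainText(string html)
        {
            if (string.IsNullOrWhiteSpace(html)) return string.Empty;

            // Remove tags, then decode entities like &amp; and &nbsp;.
            string noTags = Regex.Replace(html, "<[^>]+>", " ");
            string decoded = WebUtility.HtmlDecode(noTags);

            // Collapse runs of whitespace left behind by removed markup.
            return Regex.Replace(decoded, @"\s+", " ").Trim();
        }
    }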

Lastly, the “Full Scan” solution could be combined with the Azure Functions/microservices-style architecture to create more reusable components, allowing for easier updates, fixes, and scale. For example, create Functions for each of the above-bulleted capabilities, and leverage those Functions from the “Full Scan” solution as well as the “Incremental Scan” solution.

Azure Services Used

Below are Azure services that could potentially be used for this solution and are referenced in this document.

  • Azure Cognitive Services: Azure Cognitive Services are cloud-based services with REST APIs and client library SDKs available to help you build cognitive intelligence into your applications. You can add cognitive features to your applications without having artificial intelligence (AI) or data science skills. Azure Cognitive Services comprise various AI services that enable you to build cognitive solutions that can see, hear, speak, understand, and even make decisions.
    • Text Analytics: The Text Analytics API is a cloud-based service that provides Natural Language Processing (NLP) features for text mining and text analysis, including: sentiment analysis, opinion mining, key phrase extraction, language detection, and named entity recognition.
    • Computer Vision: The cloud-based Computer Vision API provides developers with access to advanced algorithms for processing images and returning information. By uploading an image or specifying an image URL, Microsoft Computer Vision algorithms can analyze visual content in different ways based on inputs and user choices. Learn how to analyze visual content in different ways with quickstarts, tutorials, and samples.
  • Azure Logic Apps: Azure Logic Apps is a cloud-based platform for creating and running automated workflows that integrate your apps, data, services, and systems.
  • Azure Functions: Azure Functions is a serverless solution that allows you to write less code, maintain less infrastructure, and save on costs. Instead of worrying about deploying and maintaining servers, the cloud infrastructure provides all the up-to-date resources needed to keep your applications running. You focus on the pieces of code that matter most to you, and Azure Functions handles the rest.
  • Azure Blob Storage: Azure Blob storage is Microsoft’s object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data.
  • Azure Event Hubs: Azure Event Hubs is a big data streaming platform and event ingestion service. It can receive and process millions of events per second. Data sent to an event hub can be transformed and stored by using any real-time analytics provider or batching/storage adapters.

Development Considerations

While the conceptual approach to scanning Azure DevOps is straightforward, there are programming considerations to discuss if you want to complete a comprehensive scan of Azure DevOps work items. These are a few that I discovered during my research.

  • Dealing with attachments: Checking for PII in an attachment requires additional steps, depending on the file format.
    • Text-based (.txt, .json, .xml, .html, etc.): These can have their content streamed to memory as text, and that content can then be sent to the Text Analytics API.
    • Binary (.jpg, .doc, .pdf, .png, etc.): If the format is one supported by the Read API, and the identity running the Cognitive Services resource has access to Azure DevOps, the attachment URL can be provided to the Read API directly. Otherwise, these attachments will need to be downloaded and, depending on the file type, converted into a file format accepted by the OCR features in Azure Computer Vision (the Read API); see the sketch after this list.
      • The Read API has the following input requirements:
        • Supported file formats: JPEG, PNG, BMP, PDF, and TIFF
        • For PDF and TIFF files, up to 2000 pages (only first two pages for the free tier) are processed.
        • The file size must be less than 50 MB (6 MB for the free tier) and dimensions at least 50 x 50 pixels and at most 10000 x 10000 pixels.
  • As documented in the pseudo-code for the full scan approach, an additional check and loop is needed to iterate test steps in a Test Case. For any attachments on a test step, the above “dealing with attachments” considerations also apply.
  • An individual Logic App can only be triggered by changes to a single project (the trigger action can be bound to only one Azure DevOps project).
  • Work items: the API limits work item query results to 200.
    • You may need to build narrower queries, such as iteratively “walking back” (for example, querying for items changed in the last 1 day, then 2 days, etc.).
    • OData feeds support more than 200 results, but don’t include text-based fields, so additional calls would have to be incorporated.

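To illustrate the attachment handling above, here is a sketch of calling the Read API with the Computer Vision SDK (Microsoft.Azure.CognitiveServices.Vision.ComputerVision); variable names like cvKey, cvEndpoint, and attachmentStream are placeholders:

    using System;
    using System.Linq;
    using System.Threading.Tasks;
    using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
    using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

    // attachmentStream: the downloaded work item attachment (JPEG/PNG/BMP/PDF/TIFF).
    var cvClient = new ComputerVisionClient(new ApiKeyServiceClientCredentials(cvKey))
        { Endpoint = cvEndpoint };

    // The Read API is asynchronous: submit the image, then poll for the result.
    var readOp = await cvClient.ReadInStreamAsync(attachmentStream);
    string operationId = readOp.OperationLocation.Split('/').Last();

    ReadOperationResult result;
    do
    {
        await Task.Delay(1000);
        result = await cvClient.GetReadResultAsync(Guid.Parse(operationId));
    } while (result.Status == OperationStatusCodes.Running
          || result.Status == OperationStatusCodes.NotStarted);

    // Flatten the recognized lines into one string to forward to Text Analytics.
    string extractedText = string.Join(" ",
        result.AnalyzeResult.ReadResults.SelectMany(p => p.Lines).Select(l => l.Text));
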
Actions Upon Detection

Regardless of the approach used to detect PII, the actions taken upon detection are most important. What to do depends on urgency, compliance, and trust.

Logging

Simply logging the detection may be good enough if proper reporting is all that is needed. Sending the detection event to Azure Event Hubs or Azure Event Grid provides an avenue for the event to be recorded in an analytics workspace or analysis engine.
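
As one example, a detection event could be published with the Event Hubs SDK (Azure.Messaging.EventHubs); the hub name and payload shape here are hypothetical:

    using System;
    using System.Text;
    using System.Text.Json;
    using Azure.Messaging.EventHubs;
    using Azure.Messaging.EventHubs.Producer;

    // Connection string would come from Key Vault/app settings;
    // "pii-detections" is a made-up event hub name.
    await using var producer = new EventHubProducerClient(connectionString, "pii-detections");

    var payload = JsonSerializer.Serialize(new
    {
        workItemId = 1234,                 // hypothetical values
        field = "System.Description",
        detectedAtUtc = DateTime.UtcNow
    });

    using EventDataBatch batch = await producer.CreateBatchAsync();
    batch.TryAdd(new EventData(Encoding.UTF8.GetBytes(payload)));
    await producer.SendAsync(batch);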

Notification

Notification can involve several methods:

  • Email to the user introducing the PII, that person’s manager, or a compliance team.
  • Post a message to Microsoft Teams.
  • Place a tag on the work item to draw attention to it.

Mitigation

Mitigation involves taking direct action on the PII content. During this exercise, several options presented themselves. For example, if the following text was detected in the description field of a work item:

Parker Doe has repaid all of their loans as of 2020-04-25. Their SSN is 859-98-0987. To contact them, use their phone number 800-102-1100.

  • PII deletion: Delete the PII content and save the work item.
  • PII redaction: The content can be replaced with its redacted equivalent, which Azure Text Analytics provides automatically (see the sketch after this list): ********** has repaid all of their loans as of **********. Their SSN is ***********. To contact them, use their phone number ************.
  • Secure the PII: Move the PII content to a location that has proper RBAC, such as Azure Blob Storage. Replace the description field with a notice of the PII detection and the blob URL. Only those with RBAC access will be able to view it.
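
The redacted text above is exactly what the Text Analytics PII endpoint returns. A minimal sketch using the Azure.AI.TextAnalytics SDK, with placeholder endpoint and key values:

    using System;
    using Azure;
    using Azure.AI.TextAnalytics;

    var client = new TextAnalyticsClient(
        new Uri(textAnalyticsEndpoint),            // placeholders; pull from Key Vault/app settings
        new AzureKeyCredential(textAnalyticsKey));

    string document = "Parker Doe has repaid all of their loans as of 2020-04-25. "
                    + "Their SSN is 859-98-0987. To contact them, use their phone number 800-102-1100.";

    // RecognizePiiEntities returns the detected entities plus a pre-redacted copy of the text.
    PiiEntityCollection entities = client.RecognizePiiEntities(document).Value;

    Console.WriteLine(entities.RedactedText);      // "********** has repaid all of their loans..."
    foreach (PiiEntity entity in entities)
    {
        Console.WriteLine($"{entity.Category}: {entity.Text} (confidence {entity.ConfidenceScore})");
    }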

Associated Costs

Specific costs of this solution depend heavily on overall volume: Frequency of execution, API calls, # of records checked, etc. Below is reference pricing for each service that could be utilized.

Logic Apps

Pricing page

  • Price Per Execution: $0.000025 (First 4,000 actions free)
  • Standard Connector: $0.000125 (Azure DevOps, Text Analytics)

Text Analytics

Pricing page

  • 0-500,000 text records — $1 per 1,000 text records
  • 0.5M-2.5M text records — $0.75 per 1,000 text records
  • 2.5M-10.0M text records — $0.30 per 1,000 text records
  • 10M+ text records — $0.25 per 1,000 text records

A text record is a request of up to 1,000 characters.

Computer Vision

Pricing page

  • OCR pricing table
    • 0-1M transactions — $1 per 1,000 transactions
    • 1M-10M transactions — $0.65 per 1,000 transactions
    • 10M-100M transactions — $0.60 per 1,000 transactions
    • 100M+ transactions — $0.40 per 1,000 transactions

Disclaimer

The information in this document is designed to be a sample reference for the solution described. It does not imply an enterprise-ready solution architecture, nor represent a commitment to fully design and build the described solution.