Using Azure OpenAI and Logic Apps to Triage Bugs in Azure DevOps

There are other examples out there using the publicly-available OpenAI service, but what if you’re concerned about “pulling a Samsung” and inadvertently sending your company’s IP to the public service when trying to triage a bug report? A simple error message can accidentally provide context about what you’re working on, such as project names, file types, method names, and any other context you intend to use in your prompt.

OpenAI is an AI research lab and company known for its advanced language model GPT-3, while the Azure OpenAI Service is an offering within Microsoft Azure that integrates OpenAI’s technologies into the Azure platform, allowing developers to leverage those powerful AI models at scale. Azure OpenAI (AOAI) also helps keep your company’s data in your own Azure tenant.

In my little sample implementation, I used AOAI, a Logic App, and a small Azure Function to put together an “automatic AI bug triage” service hooked up to Azure DevOps. This approach provides a simple way to send problem information to OpenAI for the sake of solving a bug, without training the public OpenAI model.

First, get yourself access to the Azure OpenAI service by requesting it here.

Once you have your service provisioned, you’ll be ready to connect to it from your Azure Function (more on that shortly).

Azure Logic App

The scaffolding for this triage service is an Azure Logic App, which orchestrates capturing the bug submission, calling the Function, and writing the results back into the bug in Azure DevOps.

Below, you see the first steps of the Logic App. It’s triggered when a bug is created in Azure DevOps (you must specify your org and project for scope, and can provide other trigger conditions if you like), and it captures the value from the “Repro Steps” field. That value is then “scrubbed” to plain text.

First steps of the Azure Logic App

Next, we call the TriageBug Azure Function and pass it the value of the ‘reprosteps’ variable. After capturing the response from the Function in a ‘triagetext’ variable, that value is written back to the original bug (work item). In this example, I put it in a custom “AI Triage” field.

Final steps of Azure Logic App

Azure Function to Call Azure OpenAI

Now, I could have skipped using a Function and just called the AOAI service directly via its REST endpoint, but modularizing it a little provides more development control over prompting (or prompt chaining), which can hopefully lead to a more accurate response from the service.

In my Function I used Microsoft Semantic Kernel, a lightweight SDK that provides easy access to AOAI. Here’s an easy method to get an IKernel object:

  private static IKernel GetSemanticKernel()
  {
      var kernel = Kernel.Builder.Build();
      kernel.Config.AddAzureTextCompletionService(
          _OpenAIDeploymentName,
          _OpenAIEndpoint,
          _OpenAIKey
      );
      return kernel;
  }

You can, for example purposes, hard-code the above ‘_OpenAIDeploymentName’, ‘_OpenAIEndpoint’, and ‘_OpenAIKey’ values, but it’s recommended you pull them from Key Vault.
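
One lightweight way to avoid hard-coding (a sketch, assuming you add app settings named AOAI_DEPLOYMENT_NAME, AOAI_ENDPOINT, and AOAI_KEY, which Azure Functions app settings can back with Key Vault references) is to read the values from configuration at startup:

    // Hypothetical app setting names; in Azure these can be Key Vault references.
    private static readonly string _OpenAIDeploymentName =
        Environment.GetEnvironmentVariable("AOAI_DEPLOYMENT_NAME");
    private static readonly string _OpenAIEndpoint =
        Environment.GetEnvironmentVariable("AOAI_ENDPOINT");
    private static readonly string _OpenAIKey =
        Environment.GetEnvironmentVariable("AOAI_KEY");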

I created a quick static method to call AOAI. I hard-coded the prompt to send to the service, but you could easily extend it to pull the prompt from somewhere else:

        static async Task<string> AskForHelp(string msg)
        {
            IKernel kernel = GetSemanticKernel();

            var prompt = @"As a software developer, identify potential causes, fixes, and recommendations 
for the following error message (include references as well): {{$input}}";

            var fnc = kernel.CreateSemanticFunction(prompt);

            var result = await fnc.InvokeAsync(msg);
            return result.ToString();

        }
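
Since the Function owns the prompting, extending this to simple prompt chaining only reuses the SDK calls already shown above. A rough sketch of what that could look like (the intermediate “summarize” prompt is purely an illustration):

        // Hypothetical two-step chain: summarize the raw repro text first, then triage the summary.
        static async Task<string> TriageWithChaining(string rawReproText)
        {
            IKernel kernel = GetSemanticKernel();

            // First pass: boil the raw repro text down to a concise error description.
            var summarize = kernel.CreateSemanticFunction(
                "Summarize the following bug report into a concise error description: {{$input}}");

            // Second pass: ask for causes, fixes, and recommendations for the summary.
            var triage = kernel.CreateSemanticFunction(
                @"As a software developer, identify potential causes, fixes, and recommendations 
for the following error message (include references as well): {{$input}}");

            var summary = await summarize.InvokeAsync(rawReproText);
            var result = await triage.InvokeAsync(summary.ToString());
            return result.ToString();
        }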

The Function trigger was only a small edit from the base code created when using an HTTP trigger:

public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
    ILogger log)
{
    log.LogInformation("TriageBug function processed a request.");

    string message = req.Query["message"];

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    message ??= data?.message;

    string responseMessage = await AskForHelp(message);

    return new OkObjectResult(responseMessage.Trim());
}
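
For testing outside the Logic App, you can hit the Function directly with an HTTP POST. A quick sketch (the URL, function key, and sample message are placeholders for your own deployment):

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;
    using Newtonsoft.Json;

    // Hypothetical smoke test: POST a message to the deployed TriageBug function.
    static async Task TestTriageBugAsync()
    {
        using var http = new HttpClient();
        var body = JsonConvert.SerializeObject(new { message = "NullReferenceException in OrderService.Submit()" });
        var content = new StringContent(body, Encoding.UTF8, "application/json");

        // Replace with your own Function App URL and function key.
        var response = await http.PostAsync(
            "https://<your-function-app>.azurewebsites.net/api/TriageBug?code=<function-key>",
            content);

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }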

If I have the Logic App enabled and file a bug in ADO, the Logic App runs and I see something like this:

AI triage of bug in ADO

So there you go!

Is it perfect? Nope. There are plenty of things you can adjust/add here to make this more robust, such as:

  • Use more AI to “learn” what best prompt to use (based on extracted text) to get more accurate results.
  • Return the triage text as HTML to enable hyperlinks and better formatting.
  • Uh.. maybe a little error handling? 😀
  • Use more values from ADO (additional bug field values, project name, description, etc) to provide better context for the prompt.

I’m sure there are more. Hopefully this gets you started!

Simple Website Hosting on Azure (Drupal, WordPress, etc.)

Let’s say you have a boatload of websites that are currently running on a variety of platforms. Some are based on Drupal, while others are WordPress or static HTML. There are also some E-commerce web applications as well, and perhaps some that are custom-coded.

Of course, Azure App Service is by far the most powerful and flexible platform for your web endeavors, but if you want to keep the technology you currently have and save money, the two most likely hosting platforms for these types of applications are Azure App Service and Azure Static Web Apps.

High-Level Considerations When Choosing an Azure Hosting Platform

Consideration | Azure App Service | Azure Static Web Apps
Type of Web Application | Best for: CMS apps (Drupal, WordPress, etc.), E-commerce applications, custom web apps, APIs, mobile backends | Best for: custom small to medium-sized websites or web applications with a limited number of pages and no server-side functionality required
Provisioning | Deploy apps with templates – Azure App Service (Microsoft Learn) | Tutorial: Publish Azure Static Web Apps using an ARM template (Microsoft Learn)
Deployment | Various deployment approaches | Directly from GitHub or Azure DevOps
Security/Authentication | Authentication and authorization – Azure App Service (Microsoft Learn) | Authenticate and authorize Static Web Apps (Microsoft Learn)
Monitoring | Monitor App Service with Azure Monitor – Azure App Service (Microsoft Learn) | Monitor Azure Static Web Apps (Microsoft Learn)
Custom Domains | Map existing custom DNS name – Azure App Service (Microsoft Learn) | Custom domains with Azure Static Web Apps (Microsoft Learn)
Pricing | App Service pricing: Windows / Linux | Static Web Apps pricing (Microsoft Azure)
Quotas/Limits | Azure subscription limits and quotas – Azure Resource Manager (Microsoft Learn) | Quotas in Azure Static Web Apps (Microsoft Learn)

Azure App Service

The most flexible of the two options is Azure App Service, in that it allows for virtually all kinds of web applications to be hosted. This includes CMS sites, custom web applications, static applications, APIs, mobile backends, and containers. The underlying operating system can be either Windows or Linux, depending on the requirements of the application.

The Azure Marketplace provides out-of-the-box support for CMS solutions like Drupal, WordPress, Umbraco, and more – all built on top of Azure App Service.

Learn more:

Azure Static Web Apps

Static web apps are commonly built using libraries and web frameworks like Angular, React, Svelte, Vue, or Blazor where server-side rendering isn’t required. These apps include HTML, CSS, JavaScript, and image assets that make up the application. With a traditional web server, these assets are served from a single server alongside any required API endpoints.

With Static Web Apps, static assets are separated from a traditional web server and are instead served from points geographically distributed around the world. This distribution makes serving files much faster as files are physically closer to end users. In addition, API endpoints are hosted using a serverless architecture, which avoids the need for a full back-end server altogether.

Learn more:

Basic Decision Tree

Below is a basic decision tree which can help teams decide which of the two platforms in Azure to use for a given use case.

Static Web Apps and App Service both have common traits:

  • Scale globally as needed
  • Host static content
  • Support continuous deployment
  • Custom domains
  • SSL certs
  • SSO authentication
  • Template, CLI, and automation
  • Monitoring

    Hope this helps!

    The Importance of Internal Developer Communities

    In my early days at Microsoft, I worked as part of our Developer & Platform Evangelism group, and later the Developer Experience group. As part of my role, I was fortunate to be able to interact with several developer communities around the US. Some of these were .NET User Groups, which were great! On the enterprise productivity side, I found helping to set up and run internal user groups and communities to be equally beneficial.

    Internal developer communities are becoming increasingly popular as organizations recognize the value of promoting knowledge sharing and collaboration among their technical teams. These communities provide an avenue for developers to connect, learn from one another, and collectively contribute to the growth and success of the organization. Let’s look at some of the other core benefits of an internal developer community and provide some basic tips on how to create one.

    One of the primary benefits of an internal developer community is the opportunity for knowledge sharing. By bringing together developers with diverse backgrounds and experiences, you create a platform for sharing ideas, best practices, and solutions to common challenges. This can help to foster innovation and improve the quality of your codebase. Additionally, it can help to create a more cohesive and collaborative team culture.

    Another benefit of an internal developer community is the ability to promote continuous learning. As technology continues to evolve at a rapid pace, it’s essential that developers stay up-to-date with the latest tools, techniques, and trends. By providing a forum for learning and professional development, you can help your team members to stay current and improve their skills. This can, in turn, improve the quality of your products and services and enhance your organization’s reputation as an innovative and forward-thinking company.

    An internal developer community can also help to improve team morale and job satisfaction. By providing opportunities for collaboration and recognition, you create a sense of belonging and engagement that can help to retain top talent. Additionally, it can help to break down silos and promote cross-functional communication, which can improve team dynamics and foster a sense of community.

    So, how can you create an internal developer community? Here are some tips to get started:

    1. Establish clear goals and objectives for the community and communicate these to the team.
    2. Provide a platform for communication and collaboration, such as a dedicated Teams channel or GitHub repo.
    3. Encourage participation and contribution from all team members, regardless of seniority or experience level.
    4. Offer opportunities for learning and development, such as regular lunch and learns, workshops, or hackathons.
    5. Recognize and reward team members for their contributions to the community and the organization as a whole.

    Also, consider the size of your developer ecosystem when setting up your community. Is it small enough for a single community, or is the organization large enough to have an over-arching community with smaller, more technology-focused user groups?

    While another organization (like Microsoft ;)) can help initiate and facilitate these communities, the most successful ones are self-sustaining.

    An internal developer community can provide a range of benefits for your organization, from improving code quality and innovation to promoting team morale and retention. Good luck! And if you’d like to learn more, contact me to help create a thriving community that supports the growth and success of your technical teams.

    Microsoft App Innovation Newsletter: January 2023 Updates & Events

    This is a recurring newsletter that I send directly to my customers. Shoot me a note if you’d like to be added to my mailing list.

    First things first, Happy New Year! I hope you enjoyed some well-deserved “down time” with family and friends. But the world doesn’t stop turning, right? Back to it! 😉

    In this edition, please find information about upcoming events, product updates, and other interesting reads. Please reach out with any questions.

    (If you have others on your team that might benefit from emails like this, please forward and CC me. Appreciate it!)

    Good Reads

    Upcoming Events

    Click on “details & registration here” to get details and sign up for a particular event. For a broader list of events, check out the Azure Event Catalog.

    Event | Date/Time/Register
    Microsoft Power Platform Virtual Training Day: Rapidly Build Apps | 01/18/2023 16:00 (MST) – 01/19/2023 18:00 (MST), Details & registration here; 01/23/2023 07:00 (MST) – 01/24/2023 09:00 (MST), Details & registration here
    Microsoft Virtual Briefing – App Innovation | 01/26/2023 07:00 – 09:00 (MST), Details & registration here
    Microsoft Power Platform Virtual Training Day: Fundamentals | 02/08/2023 16:00 – 19:25 (MST), Details & registration here
    Microsoft Virtual Briefing – Sustainability | 02/13/2023 17:00 – 19:00 (MST), Details & registration here
    Manage Power Platform and Azure deployments with GitHub | Tuesday, February 14, 2023, 9:00 AM Pacific Time (12:00 PM Eastern Time), Details & registration here
    New Breakpoint: Hey, GitHub! What’s new in the world of GitHub with Damian Brady | 02/15/2023 17:00 – 17:45 (MST), Details & registration here

    Be Successful with Microsoft Programs

    In addition to the events listed above, there are other more focused programs available to you and your group to help accelerate your application development and modernization efforts.

    • Microsoft Technology Centers (MTC) deliver immersive industry experiences and deep technical engagements focused on business outcomes. Experienced Microsoft architects help ideate on new projects, solve business problems, perform architectural reviews/designs, co-build rapid prototypes, run hackathons, and provide focused learnings on specific topics.
    • App Modernization Assessments provide holistic views of your application estate for cloud readiness, and recommendations for modernizing those apps to the Azure platform. Register for a webinar on Dec 7th to learn more.
    • Azure App of the Future is a 5-day engagement with a certified Microsoft partner to quickly get a new cloud-native app off the ground. This includes design, prototyping, and solution architecture.

    (All of these programs are fully funded by Microsoft. Let me know if you’d like to chat to learn more and get your organization enrolled!)

    As a reminder, as your Microsoft App Innovation Specialist I focus on helping customers innovate in a cloud-first world. My goal is to help teams:

    • Modernize applications to the cloud to speed delivery and increase resiliency at the best cost.
    • Build disruptive new applications natively in Azure that are secure, intelligent, and reliable.
    • Maximize developer velocity with DevSecOps tooling and practices.
    • Integrate historically disparate systems at scale via integration and API services.

    Thanks again for reading, and please don’t be a stranger!

    Webinar: Optimize Costs with Application Modernization

    Today more than ever, healthcare companies are looking to lower costs and speed up innovation through the lens of their applications. Microsoft has created a team to help support the complex and time-consuming process of analyzing ALL the components of an application AND the hardware it’s running on, providing a comprehensive recommendation and plan for anything from a single application to an entire company’s portfolio of applications.

    Join us in this webinar to learn about:​

    • State of Healthcare application landscape
    • Application-focused assessments that help deliver a financial and technical business case for each application
    • Programs and resources available to help accelerate modernization
    • Real-world success stories leveraging the Application Modernization Assessment

    • How to engage directly with a Microsoft Application Innovation Specialist

    WHEN: Wed, 12/14, 2022, 9:30AM PST

    Register here

    Microsoft App Innovation Newsletter: December Updates & Events

    This is a recurring newsletter that I send directly to my customers. Shoot me a note if you’d like to be added to my mailing list.

    As we approach December 2022, I have some great (curated) information for you to consider as you finish out this year and begin planning for 2023. Below you’ll find news about upcoming events, announcements, and funded programs that have helped other organizations be more innovative. Please reach out with any questions!

    Upcoming Events

    Click on “register now” to get details and sign up for a particular event. For a broader list of events, check out the Azure Event Catalog.

    Event | Date/Time/Register
    Innovation Fireside Chat: Creating a new business model with SaaS | Wednesday, December 7, 2022, 9:00 AM Pacific Time / 12:00 PM Eastern Time, Register now
    Microsoft Power Platform Virtual Training Day: Fundamentals | 9th December 2022, 10:00–13:25 (GMT), Register now
    Microsoft Azure Virtual Training Day: Migrating On-Premises Infrastructure and Data | December 7, 2022, 12:00 PM CST; December 8, 2022, 12:00 PM CST, Register now
    Building New Age Business Apps with Low Code and Traditional Development | December 13, 2022, 1:00 PM – 2:00 PM ET / 10:00 AM – 11:00 AM PT, Register now
    Microsoft Azure Virtual Training Day: Cloud-Native Apps | December 13, 2022, 12:00 PM CST; December 14, 2022, 12:00 PM CST, Register now

    Announcements & Links

    Be Successful with Microsoft Programs

    In addition to the events listed above, there are other more focused programs available to you and your group to help accelerate your application development and modernization efforts.

    • Microsoft Technology Centers (MTC) deliver immersive industry experiences and deep technical engagements focused on business outcomes. Experienced Microsoft architects help ideate on new projects, solve business problems, perform architectural reviews/designs, co-build rapid prototypes, run hackathons, and provide focused learnings on specific topics.
    • App Modernization Assessments provide holistic views of your application estate for cloud readiness, and recommendations for modernizing those apps to the Azure platform. Register for a webinar on Dec 7th to learn more.
    • Azure App of the Future is a 5-day engagement with a certified Microsoft partner to quickly get a new cloud-native app off the ground. This includes design, prototyping, and solution architecture.

    (All of these programs are fully funded by Microsoft. Let me know if you’d like to chat to learn more and get your organization enrolled!)

    As a reminder, as your Microsoft App Innovation Specialist I focus on helping customers innovate in a cloud-first world. My goal is to help teams:

    • Modernize applications to the cloud to speed delivery and increase resiliency at the best cost.
    • Build disruptive new applications natively in Azure that are secure, intelligent, and reliable.
    • Maximize developer velocity with DevSecOps tooling and practices.
    • Integrate historically disparate systems at scale via integration and API services.

    Thanks again for reading, and please don’t be a stranger!

    Fixing Link Types in Bulk in Azure Boards (Azure DevOps)

    I’ve had a few customers over the years that have needed to bulk change link types in Azure Boards (i.e. change the types of relationship between two work items, but at scale). How does someone get into such a situation?

    Several reasons:

    • Performed bulk imports of work items and the work item hierarchy was misconfigured.
    • Performed bulk imports of work items and test cases (to be fair, also work items), where the import doesn’t recognize “tested/tested by” link types (only “parent/child”).
    • Someone on the team was incorrectly creating links (maybe creating “related” links instead of “dependent”), and now it’s a pain to undo that.
    • I’m sure there are more.

    It’s typically the first or second one – an unexpected result from a work item import (usually from Excel). The import work items function only creates parent/child links, so if you’re trying to import test cases (for example), they won’t be recognized as tests that cover the user stories.

    Like this:

    Sample spreadsheet of work items to be imported into Azure DevOps

    If you import this spreadsheet, you get test cases that are “children” of the user stories. Not desirable.

    Whatever the reason, misconfigured links in Azure Boards can create unintended experiences (tasks not showing up on task boards, stories not properly rolling up to epics, etc.).

    Some organizations can just assign someone to walk through all the work items with links that need to be redone, delete each link, and create a new one with the appropriate link type. But that can be incredibly time-consuming. That said, there is a newly released (Q4 2022) feature that allows a user to simply change the link type (rather than delete & re-create), but that’s still a one-by-one process. If you’ve just imported 2000 work items, that could be hundreds of links to change.

    Programmatically Changing Link Types

    I figured there’s got to be a way to make these bulk changes in a more automated way. So, I dove into the Azure DevOps SDK and wrote a simple console app to accomplish this.

    You can see the full example on GitHub here.

    You effectively set the following values in the config.json file (to be passed as the only argument to the console application):

    {
      "orgName": "<org_name>",
      "projectName": "<project name>",
      "sourceWorkItemType": "<source work item type>",
      "targetWorkItemType": "<target work item type>",
      "sourceLinkType": "<link type to look for>",
      "targetLinkType": "<link type to replace with>",
      "personalAccessToken": "<pat>"
    }

    More on link types: Link types reference guide – Azure Boards | Microsoft Learn

    For example, if you specify:

    {
      "orgName": "MyKillerOrg",
      "projectName": "My Killer Project",
      "sourceWorkItemType": "User Story",
      "targetWorkItemType": "Test Case",
      "sourceLinkType": "System.LinkTypes.Hierarchy-Forward",
      "targetLinkType": "Microsoft.VSTS.Common.TestedBy-Forward",
      "personalAccessToken": "<your pat here>"
    }

    This means you want the app to find all user stories that have parent-child links to test cases and change them to have tests/tested by links.
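
    For orientation, the ‘sourceLinkType’ and ‘targetLinkType’ values are Azure Boards link type reference names (per the reference guide linked above). A few common ones, shown here as constants purely for illustration:

        // A few common Azure Boards link type reference names (for illustration only):
        const string ParentToChild = "System.LinkTypes.Hierarchy-Forward";      // parent -> child
        const string ChildToParent = "System.LinkTypes.Hierarchy-Reverse";      // child -> parent
        const string TestedBy      = "Microsoft.VSTS.Common.TestedBy-Forward";  // work item -> test case
        const string Related       = "System.LinkTypes.Related";                // related work items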

    If you know me or have seen my LinkedIn profile, you know I’m not a professional developer – so you don’t get to make fun of my code. But it works on my machine!


    Let’s take a look at the important parts:

    In LinkWorker.cs:

    The ProcessWorkItems method connects to Azure DevOps, builds and runs a WIQL query to get the work items.

    public List<WorkItem> QueryWorkItems()
    {
        // create a wiql object and build our query
        var wiql = new Wiql()
        {
            Query = "Select [Id] " +
                    "From WorkItems " +
                    "Where [Work Item Type] = '" + sourceWIT + "' " +
                    "And [System.TeamProject] = '" + projectName + "' " +
                    "Order By [Id] Asc",
        };

        // execute the query to get the list of work items in the results
        var result = witClient.QueryByWiqlAsync(wiql).Result;
        var ids = result.WorkItems.Select(item => item.Id).ToArray();

        // get work items for the ids found in query
        return witClient.GetWorkItemsAsync(ids, expand: WorkItemExpand.Relations).Result;
    }

    It then finds which of the returned work items have linked items that meet the criteria.

    // loop through work items
    foreach (var wi in workItems)
    {
        Console.WriteLine("{0}: {1}", wi.Id, wi.Fields["System.Title"]);
        targetRelationIndexes = new();
        targetWorkItemIds = new();

        if (wi.Relations != null)
        {
            WorkItemRelation rel;
            for (int i = 0; i < wi.Relations.Count; i++)
            {
                rel = wi.Relations[i];
                if (rel.Rel == sourceLinkType)
                {
                    var linkedItem = witClient.GetWorkItemAsync(GetWorkItemIdFromUrl(rel.Url)).Result;
                    if (linkedItem.Fields["System.WorkItemType"].ToString() == targetWIT)
                    {
                        targetRelationIndexes.Add(i);
                        targetWorkItemIds.Add(Convert.ToInt32(linkedItem.Id));
                    }
                }
            }
        }

        if (targetRelationIndexes.Count > 0)
        {
            Console.WriteLine("\tFound {0} links to update.", targetRelationIndexes.Count);
            // Remove current links
            BulkRemoveSourceRelationships(Convert.ToInt32(wi.Id), targetRelationIndexes);
            // Add new links
            BulkAddTargetLinks(Convert.ToInt32(wi.Id), targetWorkItemIds);
        }
        else
        {
            Console.WriteLine("\tNo links found to update.");
        }
    }

    It’s worth noting that we need to keep a list of link indexes AND work item IDs: the former is needed to remove an existing link (the remove operation references the relationship by its index), and the latter is needed to create the new link with the desired link type.

    For each work item relationship that meets the criteria, BulkRemoveSourceRelationships is called to delete the existing relationship. Subsequently, BulkAddTargetLinks creates a new relationship between the 2 work items with the specified (correct) link type. Each method uses the PatchDocument approach.

    WorkItem BulkRemoveSourceRelationships(int sourceId, List<int> indexes)
    {
        JsonPatchDocument patchDocument = new JsonPatchDocument();
        foreach (int index in indexes)
        {
            patchDocument.Add(
                new JsonPatchOperation()
                {
                    Operation = Operation.Remove,
                    Path = string.Format("/relations/{0}", index)
                }
            );
        }
        return witClient.UpdateWorkItemAsync(patchDocument, sourceId).Result;
    }

    WorkItem BulkAddTargetLinks(int sourceId, List<int> targetIds)
    {
        WorkItem targetItem;
        JsonPatchDocument patchDocument = new JsonPatchDocument();
        foreach (int id in targetIds)
        {
            targetItem = witClient.GetWorkItemAsync(id).Result;
            patchDocument.Add(
                new JsonPatchOperation()
                {
                    Operation = Operation.Add,
                    Path = "/relations/-",
                    Value = new
                    {
                        rel = targetLinkType,
                        url = targetItem.Url,
                        attributes = new
                        {
                            comment = "Making a new link for tested/tested by"
                        }
                    }
                }
            );
        }
        return witClient.UpdateWorkItemAsync(patchDocument, sourceId).Result;
    }

    That’s really about it. In my test against roughly 100 work items, it took about 15 seconds to run.

    Again, you can see the full (SAMPLE) code in the GitHub repo (yes, it’s called GoofyLittleLinkChanger).

    Questions? Thoughts? Thanks for reading!

    Key App Innovation Announcements at Ignite 2022

    Did you make it to Ignite 2022? Yes? No?

    Either way, I’ve put together some links to key sessions and announcements from the conference.

    Key Announcements

    Ignite 2022 came with a slew of announcements in areas of development and innovation. My highlights include:

    (You can read about all of the announcements from Ignite in the Ignite 2022 Book of News)

    On-Demand Sessions

    Application Modernization

    Cloud-Native

    Low/No Code

    Developer Tools

    I (or others on my team) would be happy to walk through some of these announcements in more detail with you and your team!

    So what was your favorite session?

    Migrating from TFS (or ADO Server) to Azure DevOps

    So you have Team Foundation Server or Azure DevOps Server (the new name for TFS as of 2019-ish) and you’ve decided to get that thing up to Azure (specifically, Azure DevOps Services). Great! Now what? This post will walk you through the most common options and approaches and provide some gotchas along the way.

    (For the sake of being concise, I’m going to refer to the on-premises instance as “TFS”. But know that this is interchangeable with ADO Server. Azure DevOps Services (the cloud-hosted version) will be referred to as “ADO”.)

    TFS & ADO – What’s the Difference?

    First, it’s important to understand the main differences between TFS and ADO. This Learn article goes into plenty more detail, but the key differences are:

    Feature | TFS | ADO
    Server Management | You manage | Microsoft manages
    Expenditures | Capital (servers, licenses) | Operational (subscriptions)
    Authentication | Windows Authentication/Active Directory | Microsoft Account or Azure Active Directory (highly recommended)
    Reporting | SQL Server Analysis Services | Analytics Service
    Basic differences between TFS and ADO

    There are differences in nomenclature as well. For example, in TFS the root container of projects (of a single deployment) is a Project Collection. In ADO, it’s an organization.

    Migration – An Honest Discussion

    When migrating to ADO, most people automatically look to move everything they have in TFS into ADO. We always recommend having a heart-to-heart with your teams to see if that’s actually necessary. I bet you have some old projects in there that aren’t used anymore, or branches of code that need to die on the vine. Do you absolutely need to bring over all that dead weight into your new environment? What you decide to do will impact your migration approach.

    Weighing your Migration Options

    There are several approaches to migrating from TFS to ADO. Let’s walk through the main ones (FYI, the first 3 options discussed here are also covered in MS Learn documentation):

    Start Fresh

    Maybe you don’t need to bring everything you’ve ever put into TFS to ADO. If you are fine with cutting over with your most important assets from TFS and are willing to abandon a decent amount of history (and baggage), manually importing stuff is your best route.

    Typically, the most common assets to keep are source code (and even then, not all history, but either current revisions or key labeled code). Pipelines (esp. if not YAML-based), test plans, etc. are more difficult to migrate.

    This is by far the easiest approach, but the lowest fidelity.

    Use the Migration Tool

    Let’s say you decide your team needs to bring over as much as possible, including history of source code, work items, etc.

    Microsoft provides a high-fidelity migration tool to help here. All the nitty-gritty details about the migration tooling and surrounding process are documented in this migration guide.

    The migration guide goes over all the details, but here are the key limitations and conditions you need to be aware of:

    • Your version of TFS needs to be within 2 releases (minor releases) of the current version. This means you definitely need to be on Azure DevOps Server. As of this writing, you need to be on Azure DevOps Server 2020.1.1 or later. If you’re behind, you’ll need to upgrade your on-premises installation first. This documentation is updated to reflect what version you need to have in order to get support in using the migration tool.
    • If you’re not already using it, you’ll need to get your identities migrated or synchronized with Azure Active Directory.
    • The migration tool will only migrate/import data into a new, empty ADO organization. You can’t use this tool to consolidate orgs or deployments.
    • If your existing backend SQL database is too big, the migration tool will direct you to use an alternate method to bring in your database (TLDR it involves setting up a SQL Azure VM & importing that way).
    • There are a few items that the migration tool will NOT import:
      • Extensions
      • Service hooks
      • Load test data
      • Mentions
      • Project Server Integrations (doesn’t exist for ADO)

    Again, the migration guide (https://aka.ms/AzureDevOpsImport) is your friend.

    Go the API Route

    This approach sits in between “Start Fresh” and “Migrate” in terms of fidelity. You can utilize the Azure DevOps APIs to roll your own migration utility, or leverage some 3rd party options (such as OpsHub or TaskTop).

    This option also carries its pros and cons. It’s more work for you, but you get more control. A 3rd party route will carry a cost.
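
    If you do roll your own, the same .NET client libraries used in the link-changer sample earlier are a reasonable starting point. A minimal, hypothetical sketch that just connects to an ADO organization with a PAT and lists its projects (the org URL and PAT are placeholders):

        using System;
        using Microsoft.TeamFoundation.Core.WebApi;
        using Microsoft.VisualStudio.Services.Common;
        using Microsoft.VisualStudio.Services.WebApi;

        // Connect to an Azure DevOps organization with a personal access token.
        var connection = new VssConnection(
            new Uri("https://dev.azure.com/<org_name>"),
            new VssBasicCredential(string.Empty, "<pat>"));

        // Enumerate the projects in the organization as a starting point for migration logic.
        var projectClient = connection.GetClient<ProjectHttpClient>();
        foreach (var project in await projectClient.GetProjects())
        {
            Console.WriteLine(project.Name);
        }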

    Get Help

    Microsoft has a terrific partner ecosystem that can be called upon to help you with migration as well. They have a ton of experience with the approaches I called out above, and some may have their own tooling or methods as well. If you’d like help connecting to one of those partners, reach out to your Microsoft contact (or ping me if you’re not sure who that is).

    Wrapping Up

    I hope you find this helpful when considering your move from TFS (or ADO Server) to Azure DevOps Services. If you have other tools you are thinking about migrating from, there may be options for those as well: but that’s for another blog post in the future.

    Have a look at the below links for additional information on this topic.

    Speed up your AKS Deployment with the AKS Landing Zone Accelerator!

    Whether you’re new to Kubernetes or have experience building K8S environments, you know (or will know soon!) that building a complete Kubernetes operational environment is hard work!

    As you look at Azure Kubernetes Service, the Azure team has done a lot of work to make this process easier and faster, while remaining secure. Introducing the AKS Landing Zone Accelerator!

    The AKS Landing Zone Accelerator dramatically speeds up this work by providing the templates and deployment scripts to quickly create a fully configured Kubernetes environment, tailored to meet your operational and security needs and ready to run your workloads in production.

    How exactly does this help?

    • It provides reference architectures, as well as landing zone architectures to remove guesswork, based on the AKS baseline, all in a GitHub repository.
    • The ability to templatize and customize deployments with environment variables.
    • Design guidelines and checklists.
    • The AKS construction helper to walk through the creation of the AKS operational environment.

    The AKS Construction helper is awesome – I highly recommend you check it out!

    AKS Construction helper (azure.github.io)

    It will walk you through what you want to create, and then give you either a Bicep script or a GitHub Actions YAML file to use.

    Enjoy!