Getting TFVC Repository Structure via Azure DevOps Server API

Recently, a developer asked me an interesting question: they were struggling with the Azure DevOps Server APIs while fetching repository metadata for legacy TFVC structures as part of a GitHub migration from ADO Server. This was a nice little problem to solve because, let’s be honest, we don’t really deal with these legacy TFVC repositories much anymore. Most teams have migrated to Git, and the documentation around TFVC API interactions has become somewhat sparse over the years.

The challenge was straightforward but frustrating: they could retrieve project information just fine, but getting the actual TFVC folder structure within each project? That’s where things got tricky. After doing a bit of digging through the API documentation and testing different approaches, I’m happy to say that yes, it is absolutely possible to enumerate all TFVC repositories and their folder structures programmatically.

This blog post shares the solution I put together - a practical approach to retrieve TFVC repository structure using the Azure DevOps Server REST APIs. If you’re working with legacy TFVC repositories and need to interact with them programmatically, this one’s for you.

The Challenge: Understanding TFVC API Limitations

Unlike Git, where each project can contain multiple repositories, TFVC follows a different model: each project contains exactly one TFVC repository. This fundamental difference affects how you interact with the API and retrieve repository information.

The main challenge developers face is distinguishing between project metadata and actual TFVC repository structure. When calling the standard Projects API, you receive project information but not the folder structure within the TFVC repository itself.
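
To make that concrete, here’s a minimal sketch of the kind of call that works, using the TFVC Items endpoint rather than the Projects endpoint. The collection URL, project name, and PAT below are placeholders, and you may need to adjust the api-version to match your ADO Server release:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;

    // Placeholders: your collection URL, project, and a PAT with code-read scope.
    var collectionUrl = "https://ado.example.com/DefaultCollection";
    var project = "MyProject";
    var pat = "<personal-access-token>";

    using var client = new HttpClient();
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
        "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes($":{pat}")));

    // The TFVC Items API (not the Projects API) returns the folder structure;
    // recursionLevel controls depth: OneLevel for the top level, Full for everything.
    var url = $"{collectionUrl}/{project}/_apis/tfvc/items" +
              $"?scopePath=$/{project}&recursionLevel=OneLevel&api-version=6.0";

    var json = await client.GetStringAsync(url);
    Console.WriteLine(json); // Each returned item carries a "path" and an "isFolder" flag.

Loop that over every project returned by the Projects API and you have the full TFVC picture for the collection.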

Read more
How We United 8 Developers Across Restricted Environments Using Azure VMs and Dev Containers

Introduction: When Traditional Solutions Hit a Wall

Last month, I found myself facing a challenge that I’m sure many of you have encountered: How do you enable seamless collaboration for a development team when half of them work in a locked-down environment where they can’t install any development tools, and the other half can’t access the client’s systems?

Our team of eight developers was tasked with building a proof-of-concept (PoC) for an AI-powered agentic system using Microsoft’s AutoGen framework. Here’s the kicker: this was a 3-week PoC sprint bringing together two teams from different organizations who had never worked together before. We needed a collaborative environment that could be spun up quickly, require minimal setup effort, and allow everyone to hit the ground running from day one.

The project requirements were complex enough, but the real challenge? Four developers worked from a highly restricted corporate environment where installing Python, VS Code, or any development tools was strictly prohibited. The remaining four worked from our offices but couldn’t access the client’s internal systems directly.

We tried the usual approaches:

  • RDP connections: Blocked by security policies
  • VPN access: Denied due to compliance requirements
  • Local development with file sharing: Immediate sync issues and “works on my machine” problems
  • Cloud IDEs: Didn’t meet the client’s security requirements

Just when we thought we’d have to resort to the dreaded “develop locally and pray it works in production” approach, we discovered a solution that not only solved our immediate problem but revolutionized how we approach distributed development.

The Architecture That Worked For Us

Here’s a visual representation of what we built. Note that everyone had to work on their personal (non-corporate) laptops for this to work.

flowchart TD
    A["8 Developers on Personal Laptops
4 Restricted + 4 External Teams"]
    B["SSH + VS Code Remote Connection
Certificate-based Authentication"]
    C["☁️ Azure VM (Standard D8s v3)
8 vCPUs • 32GB RAM • Ubuntu 22.04"]
    D["👤 Individual User Accounts
user1, user2, user3... user8"]
    E["🐳 Shared Dev Container
Python 3.12 + AutoGen + Azure CLI
All Dependencies Pre-installed"]
    F["📂 Shared Development Resources
• Project Repository
• Datasets & Models
• Configuration Files"]
    G["✅ Results Achieved
94% Faster Onboarding
$400/month vs $16k laptops
Enhanced Security"]
    A --> B
    B --> C
    C --> D
    D --> E
    E --> F
    F --> G
    style A fill:#e3f2fd,stroke:#1976d2,stroke-width:3px,color:#000
    style B fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px,color:#000
    style C fill:#e1f5fe,stroke:#0277bd,stroke-width:3px,color:#000
    style D fill:#fff3e0,stroke:#f57c00,stroke-width:3px,color:#000
    style E fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px,color:#000
    style F fill:#fff3e0,stroke:#f57c00,stroke-width:3px,color:#000
    style G fill:#e8f5e8,stroke:#388e3c,stroke-width:3px,color:#000
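
For a flavour of the dev container piece: a devcontainer.json along the lines below gives every user the identical Python 3.12 + AutoGen + Azure CLI toolchain. This is a hedged sketch, not our exact file; the image tag and feature reference are assumptions to adapt:

    {
      "name": "autogen-poc",
      "image": "mcr.microsoft.com/devcontainers/python:3.12",
      "features": {
        "ghcr.io/devcontainers/features/azure-cli:1": {}
      },
      "postCreateCommand": "pip install -r requirements.txt"
    }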

Let’s check out how this was built and set up…

Read more
Custom Voices in Azure OpenAI Realtime with Azure Speech Services

Building realtime voice-enabled applications with Azure OpenAI’s GPT-4o Realtime model is incredibly powerful, but there’s one significant limitation that can be a deal-breaker for many use cases: you’re stuck with OpenAI’s predefined voices like “sage”, “alloy”, “echo”, “fable”, “onyx”, and “nova”.

What if you’re building a branded customer service bot that needs to match your company’s voice identity? Or developing a therapeutic application for children with autism where the voice quality and tone are crucial for engagement? What if your users need to interrupt the assistant naturally, just like in real human conversations?

In this comprehensive guide, I’ll show you exactly how I solved these challenges by building a hybrid solution that combines the conversational intelligence of GPT-4o Realtime with the voice flexibility of Azure Speech Services. We’ll dive deep into the implementation, covering everything from the initial problem to the complete working solution.

flowchart TD
    A[👤 User speaks] --> B[🎤 Microphone Input]
    B --> C{Barge-in Detection
Audio Level > Threshold?}
    C -->|Yes| D[🛑 Stop Azure Speech]
    C -->|No| E[📡 Stream to GPT-4o Realtime]
    E --> F[🧠 GPT-4o Processing]
    F --> G[📝 Text Response
ContentModalities.Text]
    G --> H[🗣️ Azure Speech Services
Custom/Neural Voice]
    H --> I[🔊 Audio Output]
    D --> E
    I --> J[👂 User hears response]
    J --> A
    style A fill:#e1f5fe
    style D fill:#ffebee
    style G fill:#f3e5f5
    style H fill:#e8f5e8
    style I fill:#fff3e0
Read more
Ignoring Azurite Files

In the old days, developers relied on the Azure Storage Emulator to emulate Azure Storage services locally. However, the Azure Storage Emulator has been deprecated and replaced with Azurite, which is now the recommended way to emulate Azure Blob, Queue, and Table storage locally. In this post, let’s see how to set up exclusions in Visual Studio Code to prevent unwanted Azurite files from cluttering your workspace while working with Function Apps.

Azurite files
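
The short version, as a sketch: add the Azurite artifacts to files.exclude in your workspace’s .vscode/settings.json. The patterns below cover Azurite’s default file and folder names:

    {
      "files.exclude": {
        "**/__blobstorage__": true,
        "**/__queuestorage__": true,
        "**/__azurite_db*__.json": true
      }
    }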

Read more
Extracting GZip & Tar Files Natively in .NET Without External Libraries

Introduction

Imagine a scenario where a file of type .tar.gz lands in your Azure Blob Storage container. This file, when uncompressed, yields a collection of individual files. The arrival of the file triggers an Azure Function, which springs into action, decompressing the contents and transferring them into a different container.

In this context, a team may instinctively reach for a robust library like SharpZipLib. However, what if there is a mandate to accomplish this without external dependencies? With .NET 7, that becomes a reality.

In .NET 7, native support for Tar files has been introduced, and GZip is catered to via System.IO.Compression. This means we can decompress a .tar.gz file natively in .NET 7, bypassing any need for external libraries.

This post will walk you through this process, providing a practical example using .NET 7 to show how this can be achieved.

.NET 7: Native TAR Support

As of .NET 7, the System.Formats.Tar namespace was introduced to deal with TAR files, adding to the toolkit of .NET developers:

  • System.Formats.Tar.TarFile to pack a directory into a TAR file or extract a TAR file to a directory
  • System.Formats.Tar.TarReader to read a TAR file
  • System.Formats.Tar.TarWriter to write a TAR file

These new capabilities significantly simplify the process of working with TAR files in .NET. Let’s dive in and have a look at a code sample that demonstrates how to extract a .tar.gz file natively in .NET 7.
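
As a minimal sketch of the core of it (the file and folder names are placeholders): GZipStream unwraps the .gz layer, and System.Formats.Tar extracts the resulting tar stream.

    using System.Formats.Tar;
    using System.IO;
    using System.IO.Compression;

    // The destination directory must already exist.
    Directory.CreateDirectory("output");

    await using FileStream compressed = File.OpenRead("archive.tar.gz");
    await using var gzip = new GZipStream(compressed, CompressionMode.Decompress);

    // GZip hands us the plain .tar stream; System.Formats.Tar does the rest.
    await TarFile.ExtractToDirectoryAsync(gzip, "output", overwriteFiles: true);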

Read more
Unzipping and Shuffling GBs of Data Using Azure Functions

Consider this situation: you have a zip file stored in an Azure Blob Storage container (or any other location for that matter). This isn’t just any zip file; it’s large, containing gigabytes of data. It could be big data sets for your machine learning projects, log files, media files, or backups. The specific content isn’t the focus - the size is.

The task? We need to unzip this massive file (or files) and relocate the contents to a different Azure Blob Storage container. That might seem daunting, especially considering the size of the file and the number of files that might be housed within it.

Why do we need to do this? The use cases are numerous. Handling large data sets, moving data for analysis, making backups more accessible - these are just a few examples. The key here is that we’re looking for a scalable and reliable solution to handle this task efficiently.

Azure Data Factory is arguably a better fit for this sort of task, but in this blog post we will demonstrate how to establish the process using Azure Functions. Specifically, we will try to achieve this within the constraints of the Consumption plan tier, where memory is capped at 1.5GB, with Azure CLI and PowerShell playing supporting roles in our setup.

Setting Up Our Azure Environment

Before we dive into scripting and code, we need to set the stage - that means setting up our Azure environment. We’re going to create a storage account with two containers, one for our Zipped files and the other for Unzipped files.

To create this setup, we’ll be using the Azure CLI. Why? Because it’s efficient and lets us script out the whole process if we need to do it again in the future.

  1. Install Azure CLI: If you haven’t already installed Azure CLI on your local machine, you can get it from here.

  2. Login to Azure: Open your terminal and type the following command to login to your Azure account. You’ll be prompted to enter your credentials.

    az login
  3. Create a Resource Group: We’ll need a Resource Group to keep our resources organized. We’ll call it rg-function-app-unzip-test and create it in the eastus location (you can of course choose whichever region you like).

    az group create --name rg-function-app-unzip-test --location eastus
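
  4. Create the Storage Account and Containers: Something along these lines rounds out the setup (the storage account name below is a placeholder; account names must be globally unique):

    az storage account create --name stunziptest123 --resource-group rg-function-app-unzip-test --location eastus --sku Standard_LRS
    az storage container create --name zipped --account-name stunziptest123
    az storage container create --name unzipped --account-name stunziptest123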
Read more
Azure DevTest Labs Policies

Azure DevTest Labs offers a powerful cloud-based development workstation environment and a great alternative to a local development workstation or laptop for software development. This blog post is not so much about the benefits of DevTest Labs as it is about how to create policies for DevTest Labs using Bicep. Although there is good support for deploying DevTest Labs with Bicep, there is little to no documentation on creating policies for them. In this blog post, we will focus on how to go about doing exactly that.

A Brief Overview of Azure DevTest Labs

Azure DevTest Labs is a managed service that enables developers to quickly create, manage, and share development and test environments. It provides a range of features and tools designed to streamline the development process, minimize costs, and improve overall productivity. By leveraging the power of the cloud, developers can easily spin up virtual machines (VMs) pre-configured with the necessary tools, frameworks, and software needed for their projects.

Existing Documentation Limitations

While the existing documentation covers various aspects of Azure DevTest Labs, it lacks clear guidance on setting up policies for DevTest Labs in Bicep. This blog post aims to address that gap by providing a Bicep script for creating a DevTest Lab and applying policies to it. Shout out to my colleague Illian Y for persisting, finding a way around the undocumented features, and showing me how it’s done.
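
To give a taste of where this is heading: the trick is that policies hang off the lab’s built-in default policy set. A minimal sketch of the shape in Bicep follows; the API version, lab name, and allowed VM sizes are assumptions to adapt:

    resource lab 'Microsoft.DevTestLab/labs@2018-09-15' = {
      name: 'my-devtest-lab'
      location: resourceGroup().location
    }

    // Policies are children of the lab's built-in 'default' policy set.
    resource vmSizePolicy 'Microsoft.DevTestLab/labs/policysets/policies@2018-09-15' = {
      name: '${lab.name}/default/LabVmSize'
      properties: {
        status: 'Enabled'
        factName: 'LabVmSize'
        evaluatorType: 'AllowedValuesPolicy'
        // Note: the threshold is a JSON-serialized string, not a Bicep array.
        threshold: '["Standard_D2s_v3", "Standard_D4s_v3"]'
      }
    }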

Read more
Azure Logic Apps Timeout

Recently I got pulled into a production incident where a Logic App was running for a long time (in this scenario, longer than 10 minutes), but the dev crew’s intention was for it to time out after 60 seconds. These Logic Apps were a combination of HTTP-triggered and timer-based.

Logic App Default Time Limits

First, there are some default limits to keep in mind.

  1. If it’s an HTTP-based trigger, the default timeout is around 3.9 minutes

  2. For most others, the default maximum run duration of a Logic App is 90 days, and the minimum is 7 days

Ways To Change Defaults

With that, here are a couple of quick ways to make sure your Logic App times out and terminates within the time frame you set. Let’s say we want our Logic App to run for no more than 60 seconds at most:
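
For instance, one quick lever (a sketch from the code view; the action name and URI are placeholders) is the per-action limit.timeout, which request-based actions such as HTTP accept as an ISO 8601 duration:

    "HTTP_Call_Backend": {
      "type": "Http",
      "inputs": {
        "method": "GET",
        "uri": "https://backend.example.com/api/orders"
      },
      "limit": {
        "timeout": "PT60S"
      }
    }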

Read more
Create A Multi User Experience For Single Threaded Applications Using Azure Container Apps

How do you make a single-threaded app multi-threaded? This is the scenario I faced very recently. These were legacy web apps written to be single-threaded; in this context, single-threaded means the app can only serve one request at a time. I know this goes against everything a web app should be, but it is what it is.

So we have a single-threaded (legacy) web app, and all of a sudden we have a requirement to support multiple users at the same time. What are our options?

  1. Re-architect the app to be multi-threaded
  2. Find a way to simulate multi-threaded behavior

Both are great options, but in this scenario option 1 was out due to the cost involved in rewriting the app to support multi-threading. That leaves us with option 2: how can we easily simulate multi-threaded behavior at the cloud infrastructure level? It turns out that if we containerize the app (in this case, easy enough to do), we can orchestrate it so that each HTTP request is routed to a new container (i.e., every new HTTP request spins up a new container and the request is sent to it), as sketched below.
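
In Container Apps terms, the crux is an HTTP scale rule with a concurrency of 1, so each in-flight request gets its own replica. A hedged sketch with the Azure CLI (the names and image are placeholders, and in practice session affinity and scale-in behavior need thought too):

    az containerapp create \
      --name legacy-app \
      --resource-group rg-legacy-app \
      --environment aca-env \
      --image myregistry.azurecr.io/legacy-app:latest \
      --ingress external --target-port 8080 \
      --min-replicas 0 --max-replicas 30 \
      --scale-rule-name one-request-per-replica \
      --scale-rule-type http \
      --scale-rule-http-concurrency 1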

Options For Running Containers

So when it comes to running containers in Azure, our main options are below:

Read more
Application Gateway Ingress Controller For AKS

Recently I ran into an interesting issue with an AKS cluster running 2000+ services. There is nothing wrong with running 2000+ services; that’s what Kubernetes is there for: scale! But the interesting aspect that caught my attention was trying to get the Application Gateway Ingress Controller (AGIC) to provide ingress to all these services. I had worked with Istio and NGINX for ingress into AKS with no issues, but never with AGIC, so I had to try it to see where it worked well, what the advantages are, and where the limitations lie.

Application Gateway

Application Gateway (App Gateway) is a well-established layer 7 service that has been around for a while. Some of its major features are:

  • URL routing
  • Cookie-based affinity
  • SSL termination
  • End-to-end SSL
  • Support for public, private, and hybrid web sites
  • Integrated web application firewall
  • Zone redundancy
  • Connection draining

This post isn’t focused on the App Gateway itself; it’s more about how it works and what it can do as an ingress controller for AKS. You can find out more about App Gateway and all of its features here.
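
To ground that: AGIC consumes standard Kubernetes Ingress resources annotated for App Gateway. A minimal sketch of what ingress to one of those 2000+ services looks like (the service name, path, and port are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-service-ingress
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
    spec:
      rules:
        - http:
            paths:
              - path: /my-service
                pathType: Prefix
                backend:
                  service:
                    name: my-service
                    port:
                      number: 80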

Read more