Running FLUX.1 OmniControl on a Consumer GPU: A Docker Implementation tested on RTX 3060


🎯 TL;DR: Subject-Driven Image Generation on 12GB VRAM

Large AI models like FLUX.1-schnell typically require datacenter GPUs with 48GB+ VRAM. Problem: Most developers and hobbyists only have access to consumer RTX cards, which in most cases offer 6-12GB of VRAM (the expensive 4090/5090 cards, which go up to 32GB, being the exception).

Solution: Using mmgp (Memory Management for GPU Poor) with Docker containerization enables FLUX.1 OmniControl to run on RTX 3060 12GB through 8-bit quantization, dynamic VRAM/RAM offloading, and selective layer loading. The implementation provides a Gradio web interface generating 512x512 images in ~10 seconds after initial model loading, with models persisting in system RAM to avoid reload overhead.

Technical Approach: Profile 3 configuration quantizes the T5 text encoder (8.8GB → ~4.4GB), pins the FLUX transformer (22.7GB) to reserved system RAM, and dynamically loads only active layers to VRAM during inference. Tested and validated on RTX 3060 12GB with 64GB system RAM running Windows 11 + WSL2 + Docker Desktop.
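
To make the memory-management idea concrete, here is a minimal sketch using plain diffusers calls rather than the repo's actual mmgp wiring - it is not the Profile 3 code itself, just the same levers (reduced-precision weights, streaming layers to the GPU only while they execute) expressed through APIs that diffusers exposes out of the box. The model ID and prompt are illustrative.

# Conceptual sketch only - the repo wires this up through mmgp's Profile 3;
# the same ideas are shown here with standard diffusers calls.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,      # half-precision weights cut the footprint vs fp32
)

# Instead of forcing ~22.7GB of transformer weights into 12GB of VRAM, keep the
# weights in system RAM and stream each submodule to the GPU only while it runs -
# the same "active layers only" idea Profile 3 applies.
pipe.enable_sequential_cpu_offload()

image = pipe(
    "a product photo of a ceramic mug on a wooden desk",
    height=512,
    width=512,
    num_inference_steps=4,           # schnell is distilled for few-step sampling
).images[0]
image.save("output.png")

On top of this, mmgp's Profile 3 quantizes the T5 encoder to 8-bit and pins the transformer weights in reserved system RAM, which is what keeps generation times usable on a 12GB card.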

Complete Implementation: All code, Dockerfile, and setup instructions are available at github.com/Ricky-G/docker-ai-models/omnicontrol


Recently, I wanted to experiment with OmniControl, a subject-driven image generation model that extends FLUX.1-schnell with LoRA adapters for precise control over object placement. The challenge? The model requirements listed 48GB+ VRAM, and I only had an RTX 3060 with 12GB sitting in my workstation.

This is a common frustration in the AI development community. Research papers showcase impressive results on expensive datacenter hardware, but practical implementation on consumer GPUs requires significant engineering effort. Could I actually run this model locally without upgrading to an RTX 4090/5090 or paying for an Azure VM with an A100?

The answer turned out to be yes - with some clever memory management and containerization. This blog post walks through the complete process of dockerizing OmniControl to run efficiently on a 12GB consumer GPU.

Read more
Microsoft Foundry Cross-Region with Private Endpoints (Part 1)


🎯 TL;DR: Deploy Microsoft Foundry Cross-Region with Private Endpoints

Microsoft Foundry isn’t available in every Azure region, but data residency requirements often mandate that all data at rest stays within specific regions. This post demonstrates how to keep your data in your compliant region (e.g., New Zealand North) while leveraging Microsoft Foundry in another region (e.g., Australia East) purely for AI inferencing. Using cross-region Private Endpoints over Azure’s backbone network, applications securely access Foundry’s AI capabilities without data traversing the public internet—maintaining both regional compliance and zero-trust security posture.

The Solution: All data at rest, applications, and Private Endpoints remain in NZN. Microsoft Foundry deployed in AUE provides AI inferencing only. Private connectivity ensures secure, compliant architecture across regions.


When deploying Microsoft Foundry (formerly Azure AI Foundry) in enterprise environments, you’ll face a critical constraint: Microsoft Foundry isn’t available in every Azure region, yet data residency requirements mandate that all data at rest remains within specific regions.

Imagine this scenario: Your organization must keep all data in New Zealand North due to regulatory compliance, but Microsoft Foundry is only available in Australia East. You can’t move data to AUE, but you need Foundry’s AI capabilities. How do you maintain compliance while accessing AI inferencing services?

The solution is architectural: Keep all data at rest in your compliant region (NZN) and use Microsoft Foundry in the available region (AUE) purely for AI inferencing. By deploying cross-region Private Endpoints, applications in NZN securely access Foundry’s AI services over Azure’s backbone network—no public internet, no data residency violations, no compromises.

This guide walks through the complete architecture, DNS configuration, security considerations, and implementation steps for deploying this cross-region private endpoint pattern.

⚠️ Important: Foundry Agents Service Limitation

If you plan to use the Foundry Agents service specifically, there is a known limitation at the time of writing: all Foundry workspace resources (Cosmos DB, Storage Account, AI Search, Foundry Account, Project, Managed Identity, Azure OpenAI, or other Foundry resources used for model deployments) must be deployed in the same region as the VNet.

This means the cross-region pattern described in this post will not work for Foundry Agents deployments—you would need to deploy everything in the same region (e.g., all resources in Australia East where Foundry is available).

However, if you are NOT using the Foundry Agents service (i.e., you’re only using Foundry for AI inferencing via API calls—OpenAI models, Speech Services, Vision, etc.), then the cross-region private endpoint pattern works perfectly, and all your data can reside in your chosen compliant region as described in this post.

For more details, see Microsoft Learn - Virtual Networks with Foundry Agents - Known Limitations

flowchart TB
    subgraph azure["☁️ Azure Backbone"]
        direction TB
        subgraph NZN["🌏 NZN - Data Residency Region"]
            direction TB
            subgraph vnet["VNet: 10.1.0.0/16"]
                subgraph appsnet["Subnet: snet-apps • 10.1.1.0/24"]
                    client["👤 Client App / VM<br/>10.1.1.10"]
                    data[("💾 Data at Rest<br/>Storage, SQL, etc.")]
                end
                subgraph pesnet["Subnet: snet • 10.1.2.0/24"]
                    pe["🔒 Private Endpoint<br/>10.1.2.4"]
                end
            end
            dns["🔐 Private DNS Zones<br/>Resolves to Private IP"]
        end
        subgraph AUE["🌏 AUE - AI Inferencing"]
            foundry[["🤖 Microsoft Foundry<br/>myFoundry.cognitiveservices.azure.com"]]
        end
        pe ==>|"🔐 Private Link"| foundry
    end
    internet[/"🌐 Public Internet<br/>❌ Blocked"/]
    client --> dns
    dns -.->|10.1.2.4| pe
    client -->|HTTPS| pe
    foundry -.-x internet
    style azure fill:#f5f5f5,stroke:#666,stroke-width:2px,stroke-dasharray: 5 5
    style NZN fill:#e3f2fd,stroke:#1976d2,stroke-width:3px
    style AUE fill:#e8f5e9,stroke:#388e3c,stroke-width:3px
    style internet fill:#ffebee,stroke:#c62828,stroke-width:2px
    style vnet fill:#e1f5fe,stroke:#0288d1,stroke-width:2px
    style dns fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
    style pe fill:#fff3e0,stroke:#ef6c00,stroke-width:3px
    style data fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
Read more
Pimp My Terminal - Terminal Customization with Oh My Posh - A Cloud Native Terminal Setup


🎯 TL;DR: Automated Oh My Posh Terminal Setup for Cloud Native Development

Every new machine or fresh Windows install means reconfiguring your terminal environment from scratch. Problem: Manually setting up Oh My Posh, installing Nerd Fonts, and configuring custom themes is tedious and error-prone across multiple machines.

Solution: A single PowerShell script (available on GitHub: https://github.com/Ricky-G/script-library/blob/main/pimp-my-terminal.ps1) that automates the entire process - installing Oh My Posh via winget, deploying a Nerd Font and the Terminal-Icons module, creating a custom “Cloud Native Azure” theme optimized for Kubernetes and Azure workflows, and configuring your PowerShell profile with PSReadLine enhancements.

Prerequisites: Enable script execution with Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser before running. This approach transforms the multi-hour setup process into a one-command operation, providing immediate visual context for Git branches, Kubernetes clusters, Azure subscriptions, and command execution times - critical information for modern cloud native development.


Recently, I found myself setting up yet another development machine, and as I stared at the blank PowerShell terminal, I realized I’d reached my limit with manual terminal configuration. Every new machine or clean install meant the same tedious process: download Oh My Posh, find a Nerd Font installer, copy configuration files, edit PowerShell profiles, and spend 30 minutes getting everything just right.

The frustration wasn’t just about aesthetics - a properly configured terminal is a productivity multiplier. When you’re constantly switching between multiple Git repositories, Kubernetes clusters, and Azure subscriptions throughout the day, having that contextual information immediately visible saves countless keystrokes and eliminates mental overhead.

This blog post shares my automated solution: a single PowerShell script that takes a bare Windows terminal and transforms it into a fully-configured, cloud native-ready development environment in under 5 minutes. Whether you’re setting up a new machine, rebuilding after a Windows update disaster, or just want to standardize terminal configuration across your team, this automation eliminates the manual work.

Before and After Terminal

Quick Start - Get Up and Running in 5 Minutes

Want to skip the details and just get started? Here’s everything you need to run the automation script:

Step 1: Enable Script Execution

Open PowerShell as Administrator and run:

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

When prompted, type Y and press Enter.

Step 2: Download and Run the Script

# Download and run the automation script
Invoke-WebRequest -Uri "https://raw.githubusercontent.com/Ricky-G/script-library/main/pimp-my-terminal.ps1" -OutFile "$env:TEMP\pimp-my-terminal.ps1"
& "$env:TEMP\pimp-my-terminal.ps1"

The script will automatically install:

  • ✅ Oh My Posh via winget
  • ✅ MesloLGM Nerd Font
  • ✅ Terminal-Icons PowerShell module
  • ✅ Cloud Native Azure theme
  • ✅ PSReadLine enhancements
  • ✅ Custom keyboard shortcuts

Step 3: Configure Your Terminal Font

After the script completes, configure your terminal font:

Windows Terminal:

  1. Open Settings (Ctrl + ,)
  2. Go to Profiles → Defaults → Appearance
  3. Set Font face to: MesloLGM Nerd Font
  4. Save and restart terminal

VS Code:

  1. Open Settings (Ctrl + ,)
  2. Search for “terminal font”
  3. Set Terminal › Integrated: Font Family to: MesloLGM Nerd Font

Done! Open a new terminal and enjoy your beautiful, cloud native-ready prompt.


Understanding Oh My Posh: The Modern Prompt Engine

Before diving into the automation, it’s worth understanding what Oh My Posh brings to the table and why it’s become the de facto standard for PowerShell prompt customization.

Read more
Automating Searchable Branch Configuration in Azure DevOps Repos via REST API


🎯 TL;DR: Bulk Configure Searchable Branches in Azure DevOps via Hidden Policy API

Azure DevOps code search only indexes the default branch (master/main) by default, causing issues when teams use develop branches for JFrog Artifactory detection scripts. Problem: No documented API exists for bulk updating searchable branches across thousands of repositories. Solution: Use the undocumented Policy Configuration API with policy type 0517f88d-4ec5-4343-9d26-9930ebd53069 to programmatically add branches to the searchable list. This approach leverages the same API calls the Azure DevOps UI uses internally, enabling automation of what would otherwise require manual configuration across massive repository collections.


Recently, I encountered an interesting challenge while working on a JFrog Artifactory adoption tracking project across a large Azure DevOps organization. The requirement was to scan repositories for JFrog URL references to determine which teams had successfully onboarded to their new artifact management system. The problem? Some development teams exclusively work in develop branches instead of master or main, and Azure DevOps code search only indexes the default branch by default.

This seemingly simple requirement - adding develop to the searchable branches for thousands of repositories - turned into a fascinating exploration of Azure DevOps’ undocumented APIs. While there’s no official documentation for bulk updating searchable branches, I discovered that the Azure DevOps UI uses a specific Policy Configuration API under the hood that we can leverage for automation.

This blog post shares a practical approach to programmatically configure searchable branches across large Azure DevOps organizations using REST APIs that Microsoft doesn’t officially document but absolutely supports.

The Challenge: Azure DevOps Code Search Limitations

Azure DevOps code search is a powerful feature, but it comes with a significant limitation that affects many organizations: by default, only the repository’s default branch (typically master or main) is indexed for search operations.

This creates problems in several scenarios:

JFrog Adoption Tracking: Organizations implementing JFrog Artifactory need to scan all repositories for configuration files and dependency references, but teams using feature branches or develop as their primary branch won’t be detected.

Multi-Branch Development: Teams practicing GitFlow or similar branching strategies may have critical code in develop, release/*, or feature branches that needs to be searchable.

Compliance and Security Scanning: Security tools and compliance scripts that rely on code search may miss important files if they’re not in the default branch.
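
Before digging into the details, here is a sketch of the underlying REST call in Python (not the post's own tooling). The Policy Configurations endpoint and the policy type GUID are the ones discussed in this post; the shape of the settings payload is undocumented, so treat the field names below as assumptions and confirm them by capturing the request the Azure DevOps UI sends (browser dev tools work well for this).

# Hedged sketch: adds refs/heads/develop to a repository's searchable branches.
# Org URL, project, PAT, and repository GUID are placeholders; the "settings"
# field names are assumptions to verify against the UI's own request.
import base64
import requests

ORG_URL = "https://dev.azure.com/your-org"
PROJECT = "YourProject"
PAT = "your-personal-access-token"
SEARCHABLE_BRANCHES_POLICY_TYPE = "0517f88d-4ec5-4343-9d26-9930ebd53069"

auth = base64.b64encode(f":{PAT}".encode()).decode()
headers = {"Authorization": f"Basic {auth}", "Content-Type": "application/json"}

body = {
    "isEnabled": True,
    "isBlocking": False,
    "type": {"id": SEARCHABLE_BRANCHES_POLICY_TYPE},
    "settings": {
        "scope": [{"repositoryId": "<repository-guid>"}],   # assumed field name
        "searchBranches": ["refs/heads/develop"],           # assumed field name
    },
}

response = requests.post(
    f"{ORG_URL}/{PROJECT}/_apis/policy/configurations?api-version=7.1",
    headers=headers,
    json=body,
    timeout=30,
)
response.raise_for_status()
print(f"Created policy configuration {response.json().get('id')}")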

Read more
Building Voice Agents with Azure Communication Services Voice Live API and Azure AI Agent Service


🎯 TL;DR: Real-time Voice Agent Implementation

This post walks through building a voice agent that connects traditional phone calls to Azure’s AI services. The system intercepts incoming calls via Azure Communication Services, streams audio in real-time to the Voice Live API, and processes conversations through pre-configured AI agents in Azure AI Studio. The implementation uses FastAPI for webhook handling, WebSocket connections for bidirectional audio streaming, and Azure Managed Identity for authentication (no API keys to manage). The architecture handles multiple concurrent calls on a single Python thread using asyncio.

Implementation details: Audio resampling between 16kHz (ACS requirement) and 24kHz (Voice Live requirement), connection resilience for preview services, and production deployment considerations. Full source code and documentation available here


Recently, I found myself co-leading an innovation project that pushed me into uncharted territory. The challenge? Developing a voice-based agentic solution with an ambitious goal - routing at least 25% of current contact center calls to AI voice agents. This was bleeding-edge stuff, with both the Azure Voice Live API and Azure AI Agent Service voice agents still in preview at the time of writing.

When you’re working with preview services, documentation is often sparse, and you quickly learn that reverse engineering network calls and maintaining close relationships with product teams becomes part of your daily routine. This blog post shares the practical lessons learned and the working solution we built to integrate these cutting-edge services.

The Innovation Challenge

Building a voice agent system that could handle real customer interactions meant tackling several complex requirements:

  • Real-time voice processing with minimal latency
  • Natural conversation flow without awkward pauses
  • Integration with existing contact center infrastructure
  • Scalability to handle multiple concurrent calls
  • Reliability for production use cases

With both Azure Voice Live API and Azure AI Voice Agent Service in preview, we were essentially building on shifting sands. But that’s what innovation is about - pushing boundaries and finding solutions where documentation doesn’t yet exist.

Understanding the Architecture

Our solution bridges Azure Communication Services (ACS) with Azure AI services to create an intelligent voice agent. Here’s how the pieces fit together:

graph TB
    subgraph "Phone Network"
        PSTN["📞 PSTN Number<br/>+1-555-123-4567"]
    end
    subgraph "Azure Communication Services"
        ACS["🔗 ACS Call Automation<br/>Event Grid Webhooks"]
        MEDIA["🎵 Media Streaming<br/>WebSocket Audio"]
    end
    subgraph "Python FastAPI App"
        API["🐍 FastAPI Server<br/>localhost:49412"]
        WS["🔌 WebSocket Handler<br/>Audio Processing"]
        HANDLER["⚡ Media Handler<br/>Audio Resampling"]
    end
    subgraph "Azure OpenAI"
        VOICE["🤖 Voice Live API<br/>Agent Mode<br/>gpt-4o Realtime"]
        AGENT["👤 Pre-configured Agent<br/>Azure AI Studio"]
    end
    subgraph "Dev Infrastructure"
        TUNNEL["🚇 Dev Tunnel<br/>Public HTTPS Endpoint"]
    end
    PSTN -->|Incoming Call| ACS
    ACS -->|Webhook Events| TUNNEL
    TUNNEL -->|HTTPS| API
    ACS -->|WebSocket Audio| WS
    WS -->|PCM 16kHz| HANDLER
    HANDLER -->|PCM 24kHz| VOICE
    VOICE -->|Agent Processing| AGENT
    AGENT -->|AI Response| VOICE
    VOICE -->|AI Response| HANDLER
    HANDLER -->|PCM 16kHz| WS
    WS -->|Audio Stream| ACS
    ACS -->|Audio| PSTN
    style PSTN fill:#ff9999
    style ACS fill:#87CEEB
    style API fill:#90EE90
    style VOICE fill:#DDA0DD
    style TUNNEL fill:#F0E68C

Core Components

  1. Azure Communication Services: Handles the telephony infrastructure, providing phone numbers and call routing
  2. Voice Live API: Enables real-time speech recognition and synthesis with WebRTC streaming
  3. Azure AI Agent Service: Provides the intelligence layer for understanding and responding to customer queries
  4. WebSocket Bridge: Our custom Python application that connects these services
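
Of these components, the WebSocket bridge does the least glamorous but most critical work: resampling every audio frame between the 16kHz PCM that ACS streams and the 24kHz PCM that Voice Live expects. A minimal sketch of that step using Python's standard audioop module - illustrative only, not the project's actual handler:

# Illustrative resampling step between ACS (16kHz) and Voice Live (24kHz) PCM audio.
# Note: audioop is removed in Python 3.13, so pin an earlier runtime or swap in a
# numpy/scipy-based resampler.
import audioop

ACS_RATE = 16_000
VOICE_LIVE_RATE = 24_000
SAMPLE_WIDTH = 2   # 16-bit PCM
CHANNELS = 1       # mono


class Resampler:
    """Keeps audioop's filter state between chunks so the audio stays continuous."""

    def __init__(self, in_rate: int, out_rate: int) -> None:
        self.in_rate, self.out_rate = in_rate, out_rate
        self._state = None

    def convert(self, pcm_chunk: bytes) -> bytes:
        converted, self._state = audioop.ratecv(
            pcm_chunk, SAMPLE_WIDTH, CHANNELS, self.in_rate, self.out_rate, self._state
        )
        return converted


# One resampler per direction, reused for the lifetime of the call.
to_voice_live = Resampler(ACS_RATE, VOICE_LIVE_RATE)   # caller audio -> model
to_acs = Resampler(VOICE_LIVE_RATE, ACS_RATE)          # model audio -> caller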
Read more
Getting TFVC Repository Structure via Azure DevOps Server API


🎯 TL;DR: Retrieving TFVC Repository Structure via REST API

This post demonstrates how to programmatically enumerate TFVC repository folders using Azure DevOps Server REST APIs. Unlike Git repositories, TFVC follows a one-repository-per-project model with hierarchical folder structures starting at $/ProjectName. The solution uses the TFVC Items API with specific parameters: scopePath=$/ProjectName to target the project root, and recursionLevel=OneLevel to retrieve immediate children. The implementation handles authentication via Personal Access Tokens, filters results to show only folders (excluding the root), and includes error handling for projects without TFVC repositories or insufficient permissions.

Key technical details: PowerShell script implementation, proper API parameter usage, authentication setup, and handling edge cases like empty repositories and access permissions. Complete PowerShell script and utilities available here


Recently, I was asked an interesting question by a developer who was struggling with Azure DevOps Server APIs around fetching repository metadata for legacy TFVC structures as part of a GitHub migration from ADO Server. This was a nice little problem to solve because, let’s be honest, we don’t really deal with these legacy TFVC repositories much anymore. Most teams have migrated to Git, and the documentation around TFVC API interactions has become somewhat sparse over the years.

The challenge was straightforward but frustrating: they could retrieve project information just fine, but getting the actual TFVC folder structure within each project? That’s where things got tricky. After doing a bit of digging through the API documentation and testing different approaches, I’m happy to say that yes, it is absolutely possible to enumerate all TFVC repositories and their folder structures programmatically.

This blog post shares the solution I put together - a practical approach to retrieve TFVC repository structure using the Azure DevOps Server REST APIs. If you’re working with legacy TFVC repositories and need to interact with them programmatically, this one’s for you.

The Challenge: Understanding TFVC API Limitations

Unlike Git repositories where each project can contain multiple repos, TFVC follows a different model where each project contains exactly one TFVC repository. This fundamental difference affects how you interact with the API and retrieve repository information.

The main challenge developers face is distinguishing between project metadata and actual TFVC repository structure. When calling the standard Projects API, you receive project information but not the folder structure within the TFVC repository itself.
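
To jump ahead to the core of the solution: the call that does the heavy lifting is the TFVC Items API with the scopePath and recursionLevel parameters. The script in this post is PowerShell, but the REST call itself is easy to sketch in any language - here it is in Python, with the collection URL, project name, and PAT as placeholders (pick the api-version your server supports):

# Sketch of the TFVC Items call (the post's actual script is PowerShell).
import base64
import requests

COLLECTION_URL = "https://your-ado-server/DefaultCollection"   # placeholder
PROJECT = "LegacyProject"                                      # placeholder
PAT = "your-personal-access-token"                             # placeholder

auth = base64.b64encode(f":{PAT}".encode()).decode()

response = requests.get(
    f"{COLLECTION_URL}/{PROJECT}/_apis/tfvc/items",
    headers={"Authorization": f"Basic {auth}"},
    params={
        "scopePath": f"$/{PROJECT}",      # root of the project's single TFVC repository
        "recursionLevel": "OneLevel",     # immediate children only
        "api-version": "6.0",             # adjust to your Azure DevOps Server version
    },
    timeout=30,
)
response.raise_for_status()

# Keep only folders and drop the $/Project root itself.
folders = [
    item["path"]
    for item in response.json().get("value", [])
    if item.get("isFolder") and item["path"] != f"$/{PROJECT}"
]
print("\n".join(folders))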

Read more
How We United 8 Developers Across Restricted Environments Using Azure VMs and Dev Containers


🎯 TL;DR: Distributed Development with Azure VMs and Dev Containers

This post details solving a distributed development challenge where 8 developers from different organizations needed to collaborate on an AutoGen AI project - 4 from restricted corporate environments unable to install development tools, and 4 external developers without access to client systems. The solution uses a shared Azure VM (Standard D8s v3) with individual user accounts, certificate-based SSH authentication, and VS Code Remote Development connected to a shared Dev Container environment. The architecture eliminates “works on my machine” issues by providing consistent development environments, shared resources (datasets, models, configs), and enables real-time collaboration.

Implementation highlights: Automated user provisioning scripts, VS Code Remote-SSH configuration, comprehensive devcontainer.json with pre-installed Python 3.12/AutoGen/Azure CLI, shared directory structures, and security hardening with fail2ban and UFW. Development environment setup scripts and configurations documented here


Introduction: When Traditional Solutions Hit a Wall

Last month, I found myself facing a challenge that I’m sure many of you have encountered: How do you enable seamless collaboration for a development team when half of them work in a locked-down environment where they can’t install any development tools, and the other half can’t access the client’s systems?

Our team of eight developers was tasked with building a proof-of-concept (PoC) for an AI-powered agentic system using Microsoft’s AutoGen framework. Here’s the kicker: this was a 3-week PoC sprint bringing together two teams from different organizations who had never worked together before. We needed a collaborative environment that could be spun up quickly, require minimal setup effort, and allow everyone to hit the ground running from day one.

The project requirements were complex enough, but the real challenge? Four developers worked from a highly restricted corporate environment where installing Python, VS Code, or any development tools was strictly prohibited. The remaining four worked from our offices but couldn’t access the client’s internal systems directly.

We tried the usual approaches:

  • RDP connections: Blocked by security policies
  • VPN access: Denied due to compliance requirements
  • Local development with file sharing: Immediate sync issues and “works on my machine” problems
  • Cloud IDEs: Didn’t meet the client’s security requirements

Just when we thought we’d have to resort to the dreaded “develop locally and pray it works in production” approach, we discovered a solution that not only solved our immediate problem but revolutionized how we approach distributed development.

The Architecture That Worked For Us

Here’s a visual representation of what we built; everyone had to work on their personal (non-corporate) laptops for this to work.

flowchart TD
    A["8 Developers on Personal Laptops<br/>4 Restricted + 4 External Teams"]
    B["SSH + VS Code Remote Connection<br/>Certificate-based Authentication"]
    C["☁️ Azure VM (Standard D8s v3)<br/>8 vCPUs • 32GB RAM • Ubuntu 22.04"]
    D["👤 Individual User Accounts<br/>user1, user2, user3... user8"]
    E["🐳 Shared Dev Container<br/>Python 3.12 + AutoGen + Azure CLI<br/>All Dependencies Pre-installed"]
    F["📂 Shared Development Resources<br/>• Project Repository<br/>• Datasets & Models<br/>• Configuration Files"]
    G["✅ Results Achieved<br/>94% Faster Onboarding<br/>$400/month vs $16k laptops<br/>Enhanced Security"]
    A --> B
    B --> C
    C --> D
    D --> E
    E --> F
    F --> G
    style A fill:#e3f2fd,stroke:#1976d2,stroke-width:3px,color:#000
    style B fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px,color:#000
    style C fill:#e1f5fe,stroke:#0277bd,stroke-width:3px,color:#000
    style D fill:#fff3e0,stroke:#f57c00,stroke-width:3px,color:#000
    style E fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px,color:#000
    style F fill:#fff3e0,stroke:#f57c00,stroke-width:3px,color:#000
    style G fill:#e8f5e8,stroke:#388e3c,stroke-width:3px,color:#000

Let’s check out how this was built and set up…

Read more
Custom Voices in Azure OpenAI Realtime with Azure Speech Services


🎯 TL;DR: Hybrid GPT-4o Realtime with Azure Speech Services Custom Voices

This post demonstrates bypassing GPT-4o Realtime’s built-in voice limitations by creating a hybrid architecture that combines GPT-4o’s conversational intelligence with Azure Speech Services’ extensive voice catalog. The solution configures GPT-4o Realtime for text-only output (ContentModalities.Text) and routes responses through Azure Speech Services, enabling access to 400+ neural voices, custom neural voices (CNV), and SSML control. The implementation includes intelligent barge-in functionality using real-time audio amplitude monitoring, allowing users to interrupt the assistant naturally mid-response.

Technical implementation: C# application using Azure.AI.OpenAI and Microsoft.CognitiveServices.Speech SDKs, NAudio for audio I/O, streaming text collection from GPT-4o responses, RMS-based speech detection with configurable thresholds, and concurrent audio management for seamless interruption handling. Complete C# source code with audio helpers available here


Building realtime voice-enabled applications with Azure OpenAI’s GPT-4o Realtime model is incredibly powerful, but there’s one significant limitation that can be a deal-breaker for many use cases: you’re stuck with OpenAI’s predefined voices like “sage”, “alloy”, “echo”, “fable”, “onyx”, and “nova”.

What if you’re building a branded customer service bot that needs to match your company’s voice identity? Or developing a therapeutic application for children with autism where the voice quality and tone are crucial for engagement? What if your users need to interrupt the assistant naturally, just like in real human conversations?

In this comprehensive guide, I’ll show you exactly how I solved these challenges by building a hybrid solution that combines the conversational intelligence of GPT-4o Realtime with the voice flexibility of Azure Speech Services. We’ll dive deep into the implementation, covering everything from the initial problem to the complete working solution.

flowchart TD
    A[👤 User speaks] --> B[🎤 Microphone Input]
    B --> C{"Barge-in Detection<br/>Audio Level > Threshold?"}
    C -->|Yes| D[🛑 Stop Azure Speech]
    C -->|No| E[📡 Stream to GPT-4o Realtime]
    E --> F[🧠 GPT-4o Processing]
    F --> G["📝 Text Response<br/>ContentModalities.Text"]
    G --> H["🗣️ Azure Speech Services<br/>Custom/Neural Voice"]
    H --> I[🔊 Audio Output]
    D --> E
    I --> J[👂 User hears response]
    J --> A
    style A fill:#e1f5fe
    style D fill:#ffebee
    style G fill:#f3e5f5
    style H fill:#e8f5e8
    style I fill:#fff3e0
Read more
Ignoring Azurite Files


🎯 TL;DR: Managing Azurite Storage Emulation Files in VS Code

Local development with Azure Functions often requires Azurite (Azure Storage Emulator replacement) which generates storage files that clutter VS Code workspace. Problem: __azurite__, __blobstorage__, and __queuestorage__ directories appear in project explorer making navigation difficult. Solution: Configure VS Code files.exclude settings to hide these emulation artifacts while preserving their functionality for local development and testing.


In the old days, developers relied on the Azure Storage Emulator to emulate Azure Storage services locally. However, Azure Storage Emulator has been deprecated and replaced with Azurite, which is now the recommended way to emulate Azure Blob, Queue, and Table storage locally. In this post, let’s see how to set up exclusions in Visual Studio Code to prevent unwanted Azurite files from cluttering your workspace while working with Function Apps.

Azurite files

Read more
Extracting GZip & Tar Files Natively in .NET Without External Libraries


🎯 TL;DR: Native .tar.gz Extraction in .NET 7 Without External Dependencies

Processing compressed .tar.gz files in Azure Functions traditionally required external libraries like SharpZipLib. Problem: External dependencies increase complexity and security surface area. Solution: .NET 7 introduces native System.Formats.Tar namespace alongside existing System.IO.Compression for GZip, enabling complete .tar.gz extraction without external dependencies. Implementation uses GZipStream for decompression and TarReader for archive extraction with proper entry type filtering and async operations.


Introduction

Imagine being in a scenario where a file of type .tar.gz lands in your Azure Blob Storage container. This file, when uncompressed, yields a collection of individual files. The arrival of this file triggers an Azure Function, which springs into action, decompressing the contents and transferring them into a different container.

In this context, a team may instinctively reach for a robust library like SharpZipLib. However, what if there is a mandate to accomplish this without external dependencies? This becomes a reality with .NET 7.

In .NET 7, native support for Tar files has been introduced, and GZip is catered to via System.IO.Compression. This means we can decompress a .tar.gz file natively in .NET 7, bypassing any need for external libraries.

This post will walk you through this process, providing a practical example using .NET 7 to show how this can be achieved.

.NET 7: Native TAR Support

As of .NET 7, the System.Formats.Tar namespace was introduced to deal with TAR files, adding to the toolkit of .NET developers:

  • System.Formats.Tar.TarFile to pack a directory into a TAR file or extract a TAR file to a directory
  • System.Formats.Tar.TarReader to read a TAR file
  • System.Formats.Tar.TarWriter to write a TAR file

These new capabilities significantly simplify the process of working with TAR files in .NET. Let’s dive in and have a look at a code sample that demonstrates how to extract a .tar.gz file natively in .NET 7.

Read more