Modern .NET vs Legacy .NET Framework
The first decision an AI code generation pipeline must make in a .NET codebase is which generation it is dealing with: modern .NET (8/9/10), .NET Standard, or legacy .NET Framework 4.x. The patterns are different enough that mixing them produces broken code.
In our deployment data, modern ASP.NET Core has the highest first-time AI acceptance rate of any .NET flavor — comparable to Go. Legacy .NET Framework has the lowest, slightly above COBOL. Both are addressable, but they need different config.
This post covers both, and where the validation pipeline should hard-stop the AI.
ASP.NET Core: Patterns That Work
- New `Controller` with action method, request DTO, response DTO, validation attributes. Mechanical.
- Minimal API endpoints with `MapGet`/`MapPost`. AI generates these cleanly.
- New EF Core entity with `DbContext` registration and migration. Run `dotnet ef migrations add` in the validation pipeline to verify.
- Adding a new service to DI with the right lifetime (`AddScoped`/`AddSingleton`/`AddTransient`). AI agents need a per-repo hint about the dominant pattern.
- Adding `IOptions` configuration binding for a new section. Templated.
- xUnit tests with FluentAssertions. AI agents produce idiomatic test code.
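The DTO-plus-validation-attributes pattern from the first bullet is the most mechanical of these. A minimal sketch, with a hypothetical `CreateOrderRequest` (the names are illustrative, not from a real codebase); the helper runs the same attribute-based validation ASP.NET Core applies during model binding:

```csharp
using System.ComponentModel.DataAnnotations;

// Hypothetical request DTO for a "create order" endpoint.
public class CreateOrderRequest
{
    [Required, StringLength(64)]
    public string CustomerId { get; set; } = "";

    [Range(1, 1000)]
    public int Quantity { get; set; }
}

public static class DtoValidation
{
    // Evaluates the data-annotation attributes on every property of the DTO.
    public static bool IsValid(object dto) =>
        Validator.TryValidateObject(dto, new ValidationContext(dto),
            validationResults: null, validateAllProperties: true);
}
```

Because the pattern is pure attributes plus a conventional shape, it is exactly the kind of code AI agents reproduce reliably once one example exists in the repo.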
ASP.NET Core: Patterns That Fail
- Custom middleware ordering. When the order of middleware matters (authentication before authorization, exception handling before logging), AI agents sometimes shuffle them.
- `IAsyncEnumerable` streaming. AI agents will collect to a list and lose the streaming semantics.
- `ActivitySource` / OpenTelemetry instrumentation. AI sometimes adds redundant spans or misses required ones.
- `IHostedService` background workers. AI generates these but often misses the cancellation propagation.
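The `IAsyncEnumerable` failure is worth seeing concretely. A minimal sketch, with a hypothetical `ReadRowsAsync` source standing in for real async I/O: the first method preserves streaming, the second is the shape AI agents tend to produce instead.

```csharp
using System.Collections.Generic;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;

public static class Streaming
{
    // Hypothetical source that yields rows one at a time (e.g., a large export).
    public static async IAsyncEnumerable<int> ReadRowsAsync(
        int count, [EnumeratorCancellation] CancellationToken ct = default)
    {
        for (var i = 0; i < count; i++)
        {
            ct.ThrowIfCancellationRequested();
            await Task.Yield();   // stand-in for real async I/O
            yield return i;       // each row is observable before the next is produced
        }
    }

    // Anti-pattern: buffering the whole stream defeats the point.
    public static async Task<List<int>> CollectAsync(IAsyncEnumerable<int> rows)
    {
        var list = new List<int>();
        await foreach (var r in rows) list.Add(r);   // peak memory is now O(count)
        return list;
    }
}
```

A validation rule that flags `ToListAsync`/manual collection of an `IAsyncEnumerable` return value catches most instances of this mechanically.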
EF Core Performance
The N+1 query problem in EF Core is the same risk as in Java/JPA. The AI writes a LINQ query that triggers lazy loading inside a loop, and production performance degrades.
Mitigations:
- Disable lazy loading at the `DbContext` level. This forces explicit `Include` calls. AI agents handle this well when the pattern is enforced.
- Run the query log in tests. Assert that the query count for a given operation is bounded.
- Per-repo PerformanceAgent rule. Flag any LINQ enumeration inside a loop against tracked entities.
EF Core also has a sneaky failure mode: AI agents will sometimes apply `AsNoTracking()` to entities that are updated in the same transaction, so the update silently never persists. The fix is in the test layer: assert that the change is persisted.
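A minimal sketch of the enforced pattern, with hypothetical `Blog`/`Post` entities. The query itself is shown in a comment because it assumes an EF Core `DbContext` (`db`) that is not defined here:

```csharp
using System.Collections.Generic;

// Hypothetical entities; assumes EF Core with lazy loading left disabled
// (the default unless UseLazyLoadingProxies() is configured).
public class Blog
{
    public int Id { get; set; }
    public List<Post> Posts { get; } = new();
}

public class Post
{
    public int Id { get; set; }
    public int BlogId { get; set; }
}

// N+1 shape to flag: enumerating blogs and touching blog.Posts inside the loop.
// Enforced pattern: load the graph up front with an explicit Include:
//
//   var blogs = await db.Blogs
//       .Include(b => b.Posts)   // one joined query instead of one query per blog
//       .ToListAsync(ct);
//
// Add .AsNoTracking() only when nothing in the loaded graph is updated
// in the same unit of work.
```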
Async Discipline
C# has the best async story in mainstream languages, but it is also the area where AI agents make the most C#-specific mistakes:
- Sync-over-async. `.Result` or `.Wait()` deadlocks. AI agents trained on older C# do this. A per-repo lint rule (AsyncFixer) catches it.
- `async void` outside event handlers. Crash risk: exceptions escape the caller and can tear down the process. A lint rule catches it.
- Missing `ConfigureAwait(false)` in library code. Less critical in ASP.NET Core (no `SynchronizationContext`) but it matters in shared libraries.
- Cancellation token propagation. AI agents drop `CancellationToken` arguments. Per-repo rule: all async methods must accept and propagate cancellation tokens.
The C# compiler catches a lot, but not these. They need lint rules in the validation pipeline.
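The cancellation rule is the one AI agents break most often, so it is worth showing the enforced shape. A sketch with a hypothetical `OrderService` (names illustrative): every async method accepts a `CancellationToken` and passes it to every awaited call.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class OrderService
{
    public static async Task<string> ProcessAsync(int orderId, CancellationToken ct)
    {
        var payload = await LoadAsync(orderId, ct);   // token flows downward...
        await Task.Delay(1, ct);                      // ...and into BCL calls that accept one
        return payload;
    }

    private static async Task<string> LoadAsync(int orderId, CancellationToken ct)
    {
        ct.ThrowIfCancellationRequested();            // honor cancellation at entry
        await Task.Yield();                           // stand-in for real I/O
        return $"order-{orderId}";
    }
}
```

The rule is easy to lint: any `async` method without a `CancellationToken` parameter, or any awaited call that accepts one but is not given one, gets flagged.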
Legacy .NET Framework
When the codebase targets .NET Framework 4.x, the AI faces three difficulties:
- Different package management. `packages.config` instead of `PackageReference`, sometimes mixed.
- Different concurrency primitives. `Task.Run` patterns from before `async`/`await` was widespread.
- Web Forms or older MVC. AI agents are weak here because training data is sparse.
The most valuable AI work in legacy .NET codebases is modernization: moving Web Forms code to controllers, moving `packages.config` to `PackageReference`, replacing custom concurrency with `async`/`await`. The migration is well-defined enough for the AI to do it incrementally.
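The concurrency replacement is the most mechanical of these migrations. A hedged before/after sketch with a hypothetical `FetchAsync`; the legacy shape is what the AI finds, the modern shape is what it should produce:

```csharp
using System.Threading;
using System.Threading.Tasks;

public static class LegacyModernization
{
    // Legacy shape common in .NET Framework 4.x: blocking on a Task to present
    // a synchronous API. .Result risks deadlocks under a SynchronizationContext
    // (classic ASP.NET, WinForms).
    public static string FetchBlocking(int id) =>
        Task.Run(() => FetchAsync(id, CancellationToken.None)).Result;

    // Modernized shape: async all the way up, with cancellation.
    public static Task<string> FetchModernAsync(int id, CancellationToken ct) =>
        FetchAsync(id, ct);

    private static async Task<string> FetchAsync(int id, CancellationToken ct)
    {
        await Task.Delay(1, ct);   // stand-in for real I/O
        return $"item-{id}";
    }
}
```

The rewrite is incremental by design: each blocking wrapper can be replaced one call site at a time, with the validation pipeline confirming the build and tests after each step.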
What we do not recommend is letting the AI write new features in legacy .NET Framework code. The pattern matching is too weak. Modernize first, then automate.
Test Patterns
xUnit + FluentAssertions + Moq is the modern stack. NUnit + Shouldly is also common.
- AI agents produce idiomatic xUnit tests with `Theory`/`InlineData` parametrization.
- FluentAssertions `.Should()` chains are obvious to the AI.
- Moq's setup-verify pattern needs explicit per-repo guidance — over-mocking is a common AI failure.
- Snapshot testing (`Verify`) works well for AI when the snapshot pattern is established.
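The `Theory`/`InlineData` idiom is what makes xUnit backfill cheap. A sketch assuming the xunit and FluentAssertions packages, with a hypothetical `PriceMath` method under test (integer cents, to keep attribute arguments simple):

```csharp
using FluentAssertions;
using Xunit;

// Hypothetical method under test.
public static class PriceMath
{
    public static int ApplyDiscount(int priceCents, int percent) =>
        priceCents - priceCents * percent / 100;
}

public class PriceMathTests
{
    // One test body, several cases: the shape AI agents reproduce reliably.
    [Theory]
    [InlineData(10000, 0, 10000)]
    [InlineData(10000, 25, 7500)]
    [InlineData(8000, 50, 4000)]
    public void ApplyDiscount_reduces_price(int priceCents, int percent, int expected)
    {
        PriceMath.ApplyDiscount(priceCents, percent).Should().Be(expected);
    }
}
```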
Source Generators and Roslyn Analyzers
Modern .NET uses source generators heavily (System.Text.Json, Microsoft.Extensions.Logging, custom domain-specific generators). AI agents need to know:
- Don't write code that the generator will produce.
- Update the generator input (e.g., add the partial class declaration), not the generated output.
- Run the build before judging completeness — the generator runs at build time.
Per-repo config: list the generators in use. The AI uses this to avoid duplicating generated code.
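For System.Text.Json, "update the generator input, not the output" means touching only a partial class declaration. A sketch with a hypothetical `Person` DTO; the serializer implementation is emitted at build time into the other half of the partial class, so nothing beyond this declaration should be written by hand:

```csharp
using System.Text.Json.Serialization;

// Hypothetical DTO.
public record Person(string Name, int Age);

// Generator *input*: attributes plus a partial context class. Adding a new
// serializable type means adding one [JsonSerializable] attribute here,
// then rebuilding so the generator produces the serialization code.
[JsonSerializable(typeof(Person))]
public partial class AppJsonContext : JsonSerializerContext
{
}
```

Usage is `JsonSerializer.Serialize(new Person("Ada", 36), AppJsonContext.Default.Person)`. This is also why the "run the build before judging completeness" rule exists: before the build, `AppJsonContext.Default` does not compile.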
NuGet and Package Management
NuGet is AI-friendly. PackageReference updates are mechanical. Major-version bumps that involve API changes need confidence-threshold routing — same as Maven in Java, same as go.mod in Go.
AI agents sometimes corrupt solution (`.sln`) files. Validation rule: if the `.sln` file changes, run `dotnet build` on the full solution. If the build fails, reject the change.
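That rule can be expressed as a small CI configuration fragment. A sketch assuming a POSIX shell, the `dotnet` CLI on `PATH`, and a `main` comparison branch; `MySolution.sln` is a placeholder for your solution file:

```shell
# Validation gate: if the solution file was touched, the full solution must build.
if git diff --name-only origin/main...HEAD | grep -q '\.sln$'; then
    dotnet build MySolution.sln --nologo || {
        echo "solution file change broke the build; rejecting" >&2
        exit 1
    }
fi
```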
Multi-Project Solution Patterns
Large .NET solutions have 20-100+ .csproj files. AI agents need:
- Awareness of the solution structure (`dotnet sln list`).
- Awareness of project-to-project references.
- Project-scoped builds (don't rebuild the whole solution for a one-project change).
EnsureFix's planner reads the solution file and scopes work to affected projects. Without scoping, large solutions become too slow and expensive for AI iteration.
Cost Economics
.NET cost per ticket sits between Go (lower) and Java (higher). Modern ASP.NET Core ships fast. Legacy .NET Framework costs more because more iterations are needed.
For deeper context on cost structure, see the [pricing page](/pricing) and [ROI for a 50-engineer team](/blog/ai-code-generation-roi-50-engineer-team).
Where to Start
For a team introducing AI to a .NET codebase:
- Test backfill in well-isolated services. Lowest risk, highest learning.
- Dependency bumps. Mechanical, low judgment, high frequency.
- `async`/`await` discipline cleanup. Cancellation token propagation, sync-over-async fixes.
- EF Core query performance audits. AI flags risky patterns; a human reviews and approves the fix.
- Migration tickets. Web Forms → MVC, .NET Framework → modern .NET, `packages.config` → `PackageReference`.
Earn trust on these before letting the AI touch new feature work.
Summary
.NET and C# are strong AI generation targets if the codebase is on modern .NET, has consistent async discipline, controls EF Core query patterns, and provides per-repo configuration about source generators and DI conventions. Legacy .NET Framework is best treated as a modernization target — let the AI move it forward, not write new features inside it.
For the cross-cutting validation pattern that catches .NET-specific failure modes, see [enterprise safety layers](/blog/enterprise-safety-ai-generated-code).
Ready to automate your tickets?
See EnsureFix process a real ticket from your backlog in a live demo.
Request a Demo