
How to add Microsoft.Extensions.DependencyInjection to OWIN Self-Hosted WebApi

October 17, 2016 .NET, HowTo, WebApi

I could not find an example showing how to use Microsoft.Extensions.DependencyInjection as the IoC container in an OWIN self-hosted WebApi, hence this blog post.

Let’s imagine that you have a WebApi that you intend to self-host using OWIN. This is fairly easy to do: all you need is the Microsoft.Owin.Hosting.WebApp.Start method and a bit of configuration on IAppBuilder (check out ServiceHost.cs and WebApiStartup.cs in the gist below).
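To give an idea of the shape, here is a minimal hosting sketch (a sketch only; the port is a placeholder and the class names are not necessarily identical to the gist):

using System;
using Microsoft.Owin.Hosting;

public class ServiceHost
{
    public static void Main()
    {
        // WebApp.Start wires OWIN to the startup class below and begins listening.
        using (WebApp.Start<WebApiStartup>("http://localhost:9000/"))
        {
            Console.WriteLine("Self-hosted WebApi listening on http://localhost:9000/");
            Console.ReadLine();
        }
    }
}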

It becomes a bit more complicated when you want to use an IoC container, as Web API by default takes care of creating controller instances itself. To use another container you need to tell the configuration to use your implementation of IDependencyResolver (see WebApiStartup.cs and DefaultDependencyResolver.cs). DefaultDependencyResolver.cs is a very simple implementation of the resolver.
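Here is a minimal sketch of what such a resolver and startup can look like, assuming controllers are registered in the ServiceCollection (the controller shown is a made-up example, and the gist may differ in details):

using System;
using System.Collections.Generic;
using System.Web.Http;
using System.Web.Http.Dependencies;
using Microsoft.Extensions.DependencyInjection;
using Owin;

public class DefaultDependencyResolver : IDependencyResolver
{
    private readonly IServiceProvider provider;
    private readonly IServiceScope scope;

    public DefaultDependencyResolver(IServiceProvider provider)
    {
        this.provider = provider;
    }

    private DefaultDependencyResolver(IServiceScope scope)
    {
        this.scope = scope;
        this.provider = scope.ServiceProvider;
    }

    // Web API asks the resolver for controllers and their dependencies.
    public object GetService(Type serviceType) => provider.GetService(serviceType);

    public IEnumerable<object> GetServices(Type serviceType) => provider.GetServices(serviceType);

    // One scope per request, so scoped services are disposed together with it.
    public IDependencyScope BeginScope() => new DefaultDependencyResolver(provider.CreateScope());

    public void Dispose() => scope?.Dispose();
}

public class WebApiStartup
{
    public void Configuration(IAppBuilder appBuilder)
    {
        var services = new ServiceCollection();
        services.AddTransient<ValuesController>(); // register your controllers and services here

        var config = new HttpConfiguration();
        config.DependencyResolver = new DefaultDependencyResolver(services.BuildServiceProvider());
        config.MapHttpAttributeRoutes();

        appBuilder.UseWebApi(config);
    }
}

[RoutePrefix("api/values")]
public class ValuesController : ApiController
{
    [Route("")]
    public string Get() => "value";
}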

In case you are wondering what Microsoft.Extensions.DependencyInjection is: it is nothing more than a lightweight set of DI abstractions plus a basic implementation (GitHub page). It is currently used in ASP.NET Core and EF Core, but nothing prevents you from using it in “normal” .NET. You can also integrate it with more feature-rich IoC frameworks; at the moment it looks like Autofac has the best integration and documentation in place. See its OWIN integration.

The rest is the usual IoC plumbing. See the gist.

I hope this blog post helps you with integrating Microsoft.Extensions.DependencyInjection in WebApi hosted via OWIN.




Book Review: C# in Depth

November 27, 2015 .NET, Book Reviews, C#

C# in Depth, 3rd Edition. I’ve been writing C# code for more than 10 years now, and yet I had a lot to learn from this book.

The book is written by Jon Skeet (the number one guy on Stack Overflow) and is purely about the C# language. It distances itself from the .NET Framework and the libraries supplementing it.

The structure of the book follows the C# language versions. This isn’t particularly useful if you are trying to learn the language by reading this book after some other introductory book. For software professionals, though, it can be a joy, since it takes you through your own years of experience with the language. It reminds you of the times when something wasn’t possible yet and only later became possible. A kind of nostalgia, I would say.

The first chapters can be somewhat boring. I read the book cover to cover, since I didn’t want to miss anything, but that probably isn’t the best way to read it.

Jon is very pedantic when it comes to defining anything. This, of course, cuts both ways: it is very good when you have strong knowledge and understand the components of the definition, but, unfortunately, it can complicate understanding. There were places in the book I had to read a few times to understand completely. For instance, in the chapter about covariance and contravariance:

[…] the gist of the topic with respect to delegates is that if it would be valid (in a static typing sense) to call a method and use its return value everywhere that you could invoke an instance of a particular delegate type and use its return value, then that method can be used to create an instance of that delegate type.

Jon Skeet. C# in Depth. Kindle Edition.

On the other hand, I found it very important that details are not omitted. I was reading the book for its depth, so detail and precision are exactly what I was expecting, even though they come at the price of slower comprehension.

You may have a different background than I do, but if you are a .NET developer with some years of experience, chances are the only chapters with new and difficult information will be those that do not reflect everyday practical usage of the C# language. For example, all of us use LINQ these days, but very few of us need to know how it works internally or have to implement our own LINQ provider. Very few of us dig into the DLR or into how async/await works.

In almost every chapter there was something to add to my knowledge. For me the most important chapter was “Chapter 15. Asynchrony with async/await”, since I have very limited experience with this feature.

Conclusion

In my opinion, the two best books ever for C# programmers are “CLR via C#” and “C# in Depth”, which means I have just added one more to that list.

“C# in Depth” is a great, precise, and thorough book to complement and enrich your knowledge.

I highly recommend reading it.





What is the point of the ‘event’ keyword in C#?

October 31, 2015 .NET, C#

Do you know what a delegate and an event are in C#? Can you clearly explain in one or two sentences what the difference between the two is?

It might sound simple, as we .NET developers use both quite frequently. Unfortunately it isn’t that straightforward, especially when you want to be precise. Just try it now, aloud.

The first difficulty comes when you try to differentiate between a delegate type and an instance of a delegate. The next difficulty comes when you try to explain events. Your explanation may mention subscribing to and unsubscribing from an event. But, hey, you can do exactly the same with a delegate instance using exactly the same “+=” and “-=”. Have you thought about this? The real difference is that with an exposed event that is about all external code can do, whereas with an exposed delegate instance external code can also do other things, like replacing the whole invocation list with “=” or invoking the delegate right away.
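Here is a tiny sketch of that difference (the class and member names are made up for illustration):

using System;

public class Publisher
{
    // Exposed delegate instance: external code can reassign or invoke it directly.
    public Action ExposedDelegate;

    // Exposed event: external code can only subscribe with += and unsubscribe with -=.
    public event Action SomethingHappened;

    public void DoWork()
    {
        ExposedDelegate?.Invoke();
        SomethingHappened?.Invoke();
    }
}

public static class Demo
{
    public static void Main()
    {
        var p = new Publisher();

        p.ExposedDelegate += () => Console.WriteLine("delegate handler");
        p.ExposedDelegate = null;       // allowed: wipes the whole invocation list
        p.ExposedDelegate?.Invoke();    // allowed: external invocation

        p.SomethingHappened += () => Console.WriteLine("event handler");
        // p.SomethingHappened = null;  // compile error outside the declaring class
        // p.SomethingHappened();       // compile error outside the declaring class

        p.DoWork();
    }
}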

An event is nothing more than an encapsulation convenience over a delegate, provided by the C# language.

On a conceptual level, an event is probably much more than just some convenience, but that’s beside the point in this post.

To feel the difference, all you need to do is write some code. Nothing helps you understand things better than writing code and reading quality resources.

Please see below my attempt at understanding the difference between events and delegates. I added plenty of comments to make it self-explanatory.

Click here to see the gist

Triggering the person’s sickness event is the responsibility of the person’s internals (the stomach, in this example), and it is correct for it to be encapsulated in the Person class; otherwise external code would be able to make a person sick (I like the double meaning).

The next step is to understand what the C# compiler generates when you use the ‘event’ keyword and how else you can declare an event other than in the field-like style. I don’t describe that in this post (I’ve only read about these details), but they are quite interesting as well.
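For reference, this is roughly the shape hiding behind a field-like event: a private delegate field plus add/remove accessors (a sketch only; the compiler actually generates lock-free accessors):

using System;

public class PublisherWithAccessors
{
    private Action somethingHappened;            // backing delegate field
    private readonly object gate = new object();

    // An event declared with explicit add/remove accessors instead of the field-like style.
    public event Action SomethingHappened
    {
        add { lock (gate) { somethingHappened += value; } }
        remove { lock (gate) { somethingHappened -= value; } }
    }

    public void Raise()
    {
        somethingHappened?.Invoke();
    }
}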

Proficiency in a programming language comes with a deep understanding of the basics. I keep proving this to myself every now and then.




Load Test use case requiring plugins and synchronous runs for same data

October 5, 2015 .NET, Performance, VS

Load testing is a great way of finding out whether there are any performance issues with your application. If you don’t know what a load test in VS is, please read this detailed MSDN article on how to use it.

What we want to load test

I have experience creating load tests for high-load web services in the entertainment industry. At the moment I’m working on an internal web application in which users exclusively “check out” and “check in” big sets of data (I will call them “reports” here). In this post I want to describe one specific use case for load testing.

In our application only one user at a time can work with a particular report. Because of this, load tests cannot simulate multiple users working on the same data. Basically, for one particular user and report, testing has to be synchronous: the user finds a report, checks it out, checks it in, and so on for the same report. We want to create multiple such scenarios and run them in parallel, simulating the same user working on different sets of data. This could be extended to really simulate multiple users if we also implemented some Windows impersonation.

There are other requirements for this performance testing. We want to be able to quickly switch the web server we run these tests against. This also means that the database will be different, so we cannot supply requests with hardcoded data.

My solution

I started by creating a first web performance test using browser recording: an IE add-on records all requests to the server. I recorded one scenario.

Depending on your application you might get a lot of requests recorded. I kept only those that are the most important and time-consuming. One of the first things you want to do is “Parameterize Web Servers…”. This extracts your server name into a separate “Context Parameter” which you can easily change later. In my case most of the requests are report specific, so I added another parameter called “ReportId” and then used “Find and Replace in Request…” to replace the concrete id with the “{{ReportId}}” parameter.

[Screenshot: web test context menu]

The recorder obviously records everything “as is” by embedding the concrete report’s JSON into the “String Body”. I want to avoid this by extracting “ReportJson” into a parameter and then using it in PUT requests. You can do this with “Add Extraction Rule…” on the GET request, specifying that you want to save the response into a parameter. Now you can use “{{ReportJson}}” in the String Body of the PUT requests. As simple as that.


Unfortunately, not everything is that straightforward. When our application does a PUT request it generates a correlationId that is later used to update the client on processing progress. For custom actions you can write plugins; I’ve added two of them. One simply skips some of the dependent requests (you can use something similar to skip requests to external systems referenced from your page); a sketch of that idea follows below.
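Something along these lines, using the standard WebTestPlugin PostRequest hook (a sketch only; the class and property names are made up, and you would configure the URL fragment per application):

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;

namespace YourAppLoadTesting
{
    public class SkipDependentRequests : WebTestPlugin
    {
        // Any dependent request whose URL contains this fragment will be dropped.
        public string SkipUrlsThatContain { get; set; }

        public override void PostRequest(object sender, PostRequestEventArgs e)
        {
            var toRemove = new List<WebTestRequest>();
            foreach (WebTestRequest dependent in e.Request.DependentRequests)
            {
                if (dependent.Url.Contains(SkipUrlsThatContain))
                {
                    toRemove.Add(dependent);
                }
            }
            foreach (WebTestRequest dependent in toRemove)
            {
                e.Request.DependentRequests.Remove(dependent);
            }
        }
    }
}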

The other plugin I’ve implemented takes the “ReportJson” parameter and updates it with a newly generated correlationId. Here it is:

using System;
using Microsoft.VisualStudio.TestTools.WebTesting;

namespace YourAppLoadTesting
{
    public class AddCorrelationIdToReportPutRequest: WebTestPlugin
    {
        public string ApplyToRequestsThatContain { get; set; }

        public string BodyStringParam { get; set; }

        public override void PreRequest(object sender, PreRequestEventArgs e)
        {
            // Only touch the PUT requests that target the configured URL fragment.
            if (e.Request.Url.Contains(ApplyToRequestsThatContain) && e.Request.Method == "PUT")
            {
                var requestBody = new StringHttpBody();
                requestBody.ContentType = "application/json; charset=utf-8";
                requestBody.InsertByteOrderMark = false;
                // Take the extracted report JSON and inject a freshly generated correlationId.
                requestBody.BodyString = e.WebTest.Context[BodyStringParam].ToString()
                    .Replace("\"correlationId\":null",
                        string.Format("\"correlationId\":\"{0}\\\\0\"", Guid.NewGuid()));
                e.Request.Body = requestBody;
            }
            base.PreRequest(sender, e);
        }
    }
}

In the plugin configuration I set ApplyToRequestsThatContain to “api/report” and BodyStringParam to “ReportJson”.

To finish my load test, all I had to do was copy-paste a few of these web tests and change the ReportIds. After that I added the web tests to a load test as separate scenarios, making sure that every scenario has a constant load of only 1 user. This ensures that each scenario runs synchronous operations on its own report.


I was actually very surprised at how flexible and extensible load tests in VS are. You can even generate code from your web test to have complete control over your requests. I would recommend generating the code at least once just to understand what it does under the hood.

I hope this helps someone.




Edge.js integration into C# project to run some JavaScript on the server

January 23, 2015 .NET, JavaScript

Edge.js allows you to run JavaScript from C# in a .NET environment, and the other way around: running C# from JavaScript executing in a Node.js process. There is nice documentation available on the Edge.js web site. Here I would like to point out a few of the issues you might stumble upon when integrating Edge.js into your C# project.

A bit of a story on why we are moving from Jurassic to Edge.js

In the project I’m working on we have plenty of shared logic that we want to execute both on the client and on the server, and we decided it should be written in JavaScript. As our backend is .NET, we used the Jurassic library to compile the JavaScript code on the server and execute it whenever we needed it. We used Jurassic for quite some time and it worked fine. It is a bit slow on the initial compile of all our JS code, so our startup time lagged (+30 seconds or so), and execution time also left something to be desired. But recently we started to get crashes that seemed to come from the library itself, and it is a real pain to find out what the problem was, since the library doesn’t provide a JS stack trace.

How to integrate Edge.js in a C# project

If everything goes smoothly, all you need to do is reference EdgeJs.dll through NuGet and execute your Node script using Edge.Func. But life is not that easy.

I would recommend creating a minimalistic project with the basic things used in the project you are going to integrate Edge into. This will save you time when testing how the integration works. You can start with the hello world available on the Edge.js page. I’ve also created a simple project with a few things I wanted to test out, mainly loading modules, passing parameters in, and getting results back.

Below are three files from my sample project. The code should be easy to follow, and there are some comments just below it.

In Program.cs you can see how easy it is to invoke Edge and pass in a dynamic object. Getting the result back is also super easy. If you are working with large objects you might want to convert the object to JSON and then pass that into your JavaScript, which is what we do using Newtonsoft.Json.
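To give an idea, here is a minimal sketch of such an invocation, with the JavaScript inlined instead of loaded from a module file (the payload and names are made up):

using System;
using EdgeJs;
using Newtonsoft.Json;

class Program
{
    static void Main()
    {
        // The Node-side function must accept (payload, callback) and report its result via the callback.
        var add = Edge.Func(@"
            return function (payload, callback) {
                var input = JSON.parse(payload);
                callback(null, input.a + input.b);
            };
        ");

        // Larger objects are easier to marshal as a JSON string.
        var json = JsonConvert.SerializeObject(new { a = 2, b = 3 });
        var result = add(json).Result;
        Console.WriteLine(result); // 5
    }
}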

Edge.Func has to be given a function with a specific signature that reports its result through a callback. Alternatively, you can have a module that is itself such a function and load that, which is exactly what I did with the module edgeEntryPoint.js. To do something useful, it dynamically finds an object ‘className’ and calls a function ‘functionName’ on it, passing some other parameters along. This is somewhat similar to what we want to achieve in our project.

calc2D.js is a module with the logic I’m trying to call. There is one interesting thing about it: the way it exposes itself to the world. It checks whether ‘window’ exists and assigns itself to it, otherwise it assigns itself to ‘GLOBAL’, so the module works fine both in the browser and in Node.js.

A few things that could go wrong

Signing

It is very likely that you are signing your assemblies, in which case you will get a compile error telling you:

Assembly generation failed — Referenced assembly ‘Edge’ does not have a strong name.

Don’t panic, you can still sign Edge.dll. There are a few ways of doing it. You can rebuild the Edge source code with your key.snk, or, what I find easier, disassemble, rebuild, and sign it from the VS command prompt using two commands:

ildasm /all /out=Edge.il Edge.dll

ilasm /dll /key=key.snk Edge.il

window is not defined

If you want to run the same JavaScript both on the client and on the server, you might run into issues with namespacing and global variables. Above I’ve already demonstrated how we expose our own namespace by checking whether ‘window’ is defined. If you are loading a lot of modules you might want to assign them to global variables before your callback function returns. For example:

moment = require('../../moment.js');

If you still get the error ‘window is not defined’, it could be that some internal logic relies on a global ‘window’ being defined somewhere. To fix it you can put this kind of hack in front of your entry function:

GLOBAL.window = GLOBAL;

Edge.Func hangs

There is already an issue reported for this on GitHub, edge#215, which I believe is similar to what we are experiencing (if not exactly the same).

In our case we want to run JavaScript from .NET running in IIS. I created a wrapper around Edge that keeps the compiled function, and invoking is done whenever we need it. One interesting thing started to happen when I had the web site pointing to the bin folder: after rebuilding the app, initialization would hang on Edge.Func. If I killed the w3wp.exe process and started the app from scratch, everything worked just fine. Unfortunately I was not able to reproduce this with a console application. I suspect it has something to do with how IIS threads run and possibly lock files, but even when I locked the node.dll, double_edge.js, and edge.node files for the console app it was not reproducible.

The issue was solved by ensuring that IIS is stopped before building the web project. This can be achieved by running ‘iisreset /stop’ and ‘iisreset /start’ in the BeforeBuild and AfterBuild targets of your csproj file.

I don’t think this is the best way to solve it, or that running Edge.js in IIS is very reliable, so I spent a bit of time reading the Edge.js source code and debugging it. To be honest, I don’t quite understand everything around invoking native code to make all this magic happen, so I’m not 100% sure the whole solution with Edge.js will prove to be a decent one.

All in all, Edge.js is a really great tool. Not so long ago, the things it does would have been unbelievably difficult to achieve.

I hope this post is of some help to you.




How to consume WCF service in .NET

May 27, 2013 .NET, WCF

What? I must be kidding. This is not a blog for kids playing with .NET. Every professional .NET developer knows how to consume a WCF service. Don’t they? Nothing could be easier.

Well, not that long ago I realized that the way I like to consume WCF services is not 100% correct.

What I like to do is use “using”:

using (var client = new SomeServiceClient())
{
  var response = client.SomeServiceOperation(request);
  //return or do something
}

While this looks nice, here is something even kids won’t like: the Dispose method for the client is not really implemented correctly by Microsoft! It can throw an exception if there is a network problem, thereby masking other exceptions that may have happened in between. You can understand the issue better if you have a look at the WCF samples (WF_WCF_Samples\WCF\Basic\Client\UsingUsing).

MS proposes their own solution (read it here):

var client = new SomeServiceClient();
try
{
    var response = client.SomeServiceOperation(request);
    // do something
    client.Close();
}
catch (Exception)
{
    client.Abort();
    throw;
}

While this is the correct way, it is too much code, especially if you add catch blocks for CommunicationException and TimeoutException as recommended by MS. People on the internet propose other solutions, like wrapping the call or using extension methods.

Here is my solution, which is nothing new, just a slightly modified version of the best proposed answer on SO:

An elegant usage example with return statements:

return Service<ISomeServiceChannel>.Use(client =>
{
    return client.SomeServiceOperation(request);
});

And the solution itself:

public static class Service<TChannel>
{
    public static ChannelFactory<TChannel> ChannelFactory = new ChannelFactory<TChannel>("*");

    public static TReturn Use<TReturn>(Func<TChannel, TReturn> codeBlock)
    {
        var proxy = (IClientChannel)ChannelFactory.CreateChannel();
        var success = false;
        try
        {
            var result = codeBlock((TChannel)proxy);
            proxy.Close();
            success = true;
            return result;
        }
        finally
        {
            if (!success)
            {
                proxy.Abort();
            }
        }
    }
}

And some bitterness for the end. It doesn’t look like Microsoft is in a hurry to fix Dispose, even though they should, according to their own guidelines. But even knowing this, I still like “using” and will probably stick to it for smaller things. You see, my problem is that I have never ever experienced inconveniences or issues because of it.

Is it the same for you, or do you have a story to share with me and others in the comments? :)




GMT vs. UTC and Time Zones in .NET

May 26, 2013 .NET, C#

Recently I had some correspondence with an HR person from Great Britain. In one of the e-mails I wrote “16:00 CEST (UTC+2) would work for me”, which confused the guy, so he asked what GMT UK time that is. If I had been overly pedantic, I could have replied that GMT is roughly the same as UTC and that in GMT it is 14:00, but of course that would have confused him even more. I replied with “3PM”, as that was the corresponding time in the UK.

The problem is that 3PM (or 15:00) is not GMT time, but rather “GMT Standard Time + Daylight Saving Time”, which, when you look it up, is better called BST – British Summer Time. On the other hand, I also understand that for many people it is much easier to refer to their time zone as GMT (maybe for historical reasons).

I wouldn’t write this blog post if the confusion existed only in the heads of ordinary people. Unfortunately, big enterprises and small startups alike still frequently fail to implement proper time zone handling in their applications.

Let’s start with this: we no longer argue about whether the Earth orbits the Sun or the other way around, and in the same way we no longer depend on astronomical events to measure time, although no one would like it if 1PM fell in the middle of the night. The atomic clock is the ultimate source of truth for time, but since the Earth does not spin that precisely, Coordinated Universal Time (UTC) was developed. GMT is a historical term, and it would be great to leave it aside.

If you want to understand modern time measurement, I would strongly recommend reading these two articles: Coordinated Universal Time (UTC) Explained and GMT and Other Time Systems Explained.

Now a bit more for not-so-ordinary people

When it comes to source code, things do not get much cleaner. From the answer to the question “Difference between UTC and GMT Standard Time in .NET” on SO:
GMT does not adjust for daylight savings time. You can hear it from the horse’s mouth on this web site. Add this line of code to see the source of the problem:
Console.WriteLine(TimeZoneInfo.FindSystemTimeZoneById("GMT Standard Time").SupportsDaylightSavingTime);

 

Output: True.
This is not a .NET problem, it is Windows messing up. The registry key that TimeZoneInfo uses is HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Time Zones\GMT Standard Time
You’d better stick with UTC.
And I partially agree that it is Windows messing up, for a few reasons. If you look at the UI corresponding to the “GMT Standard Time” registry entry:

[Screenshot: Windows time zone settings showing the “GMT Standard Time” entry displayed as UTC]

you will notice that it is displayed as UTC, yet it is also DST adjustable. And when you read the MSDN page about TimeZoneInfo, you will get the impression that “standard times” are not DST adjustable, but when converting in .NET it actually does take daylight saving into account. The conclusion, I’m afraid, is that we have to be very cautious when dealing with time zone conversions.

The conversion itself can be done using the TimeZoneInfo class (same link again):

 

TimeZoneInfo GMT = TimeZoneInfo.FindSystemTimeZoneById("GMT Standard Time");
DateTime postDateUtc = TimeZoneInfo.ConvertTimeToUtc(dbRecord.PostDateInLocalLondon, GMT);

And please be careful: you will need some exception handling and have to be ready for non-existent or ambiguous times. For example, this:

TimeZoneInfo GMT = TimeZoneInfo.FindSystemTimeZoneById("GMT Standard Time");
DateTime notExistingInGmtTime = new DateTime(2013, 3, 31, 1, 30, 0, DateTimeKind.Unspecified);
DateTime dateInUtc = TimeZoneInfo.ConvertTimeToUtc(notExistingInGmtTime, GMT);

would throw an exception. The reason is that there is no such time as 1:30AM on 31 March 2013 in London.
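A small sketch of how to guard against this with the checks TimeZoneInfo already provides:

TimeZoneInfo gmt = TimeZoneInfo.FindSystemTimeZoneById("GMT Standard Time");
DateTime local = new DateTime(2013, 3, 31, 1, 30, 0, DateTimeKind.Unspecified);

if (gmt.IsInvalidTime(local))
{
    // 01:30 on 31 March 2013 was skipped when the clocks moved forward.
    Console.WriteLine("This local time does not exist in London.");
}
else if (gmt.IsAmbiguousTime(local))
{
    // Times repeated when the clocks move back map to two possible UTC instants.
    Console.WriteLine("This local time is ambiguous; choose an offset explicitly.");
}
else
{
    DateTime dateInUtc = TimeZoneInfo.ConvertTimeToUtc(local, gmt);
    Console.WriteLine(dateInUtc);
}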

How to work with Time, Dates and Time Zones in .NET
Everything starts with the basics. If you get them wrong, how do you expect to write quality code? It is not a shame to read up again on DateTime or even Boolean. Deep inside there is always something hidden – it just depends on the depth.
In my career I’ve experienced quite a few (for such a fundamental concept) problems and inconveniences caused by incorrect handling of time.
My rules of thumb:
  • Always persist date and time in UTC, or at least save the offset, but never-ever local time alone
  • Use DateTimeOffset – it is DateTime v2 – or at least make sure to avoid the DateTime pitfalls (see the sketch after this list)
  • Be explicit about the time zone used; in APIs I would also name date & time fields with the suffix “Utc” (ugly?)
  • Convert to local time just before displaying
  • … and check the internet for best practices
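A minimal sketch of the first rules with DateTimeOffset (the zone id and format are just examples):

// Capture the moment in UTC; DateTimeOffset keeps the offset with the value.
DateTimeOffset createdAtUtc = DateTimeOffset.UtcNow;

// Persist createdAtUtc as-is; convert to a user's zone only when displaying.
TimeZoneInfo cet = TimeZoneInfo.FindSystemTimeZoneById("Central European Standard Time");
DateTimeOffset createdAtLocal = TimeZoneInfo.ConvertTime(createdAtUtc, cet);
Console.WriteLine(createdAtLocal.ToString("yyyy-MM-dd HH:mm zzz"));
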
Before finishing this post, I would like to mention that I have a task for myself as well: exploring Noda Time. If you haven’t read “What’s wrong with DateTime anyway?”, you should!
… and, sorry, but I had to copy one final quip:
If it is Coordinated Universal Time, why is the acronym UTC and not CUT? When the English and the French were getting together to work out the notation, the French wanted TUC, for Temps Universel Coordonné. UTC was a compromise: it fit equally badly for each language. :)

