Opinion

Do not misuse or over-abstract AutoMapper

November 27, 2016 AutoMapper, Opinion 14 comments

AutoMapper is a great little library that every .NET project uses (well, lots of them do). I used it for the first time in 2010 and wrote a blog post about it.

Since then I have observed a few things:

  • Almost every project I worked on that needed some kind of object mapping used this library. In rare cases there was a pet library or manual mapping in place.
  • Almost every project had some abstraction over the library, as if it were going to be replaced one day or as if a different mapping implementation would ever be needed.
  • The basic API of the library hasn't changed at all: CreateMap and Map are still there and work the same. At the same time, performance, testability, exception handling, and feature richness have improved significantly. The last one, in my opinion, is not such a good thing, as it leads to the next point.
  • In many of those projects AutoMapper was simply misused: code placed in AfterMap or in various kinds of resolvers would start containing crazy things. In the worst cases, actual business logic was written in resolvers.

I have always been of the opinion:

Less Code – Less Bugs; Simple Code – Good Code.

Having seen this trend with the library, I would like to suggest simplifying its usage by limiting ourselves. Simply:

  • Use AutoMapper only for simple mappings – basically, one property to one property, with the majority of properties mapped by matching names. If you find yourself in a situation where over half of your mappings are specified explicitly via the ForMember method, it may be a case for doing the mapping manually (at least for that specific type) – it will be cleaner and less confusing.
  • If you have some logic to add to your mapping, do not add it via AutoMapper. Write a separate interface/class and use it (via DI) where your logic has to be applied; this way you will also be able to test it nicely (see the sketch right after this list).
  • Do not abstract AutoMapper behind interfaces/implementations. I've seen this abstracted in such a way that you needed to create a class (empty in many cases) for each pair of mapped types, and somewhere there would be custom reflection code that initializes all of the mappings. Instead, use the built-in AutoMapper Profile class and the Mapper.Initialize method. If you still want at least some abstraction to avoid referencing AutoMapper everywhere, keep it simple.
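
To make the second point concrete, here is a minimal sketch of what I mean by keeping logic next to the mapping rather than inside it. All type names here (Order, OrderViewModel, IDiscountCalculator) are made up for the example:

```csharp
using AutoMapper;

public class Order { public decimal Total { get; set; } }
public class OrderViewModel { public decimal Total { get; set; } public decimal Discount { get; set; } }

// The business rule lives in its own small, testable class...
public interface IDiscountCalculator
{
    decimal CalculateDiscount(Order order);
}

public class DiscountCalculator : IDiscountCalculator
{
    public decimal CalculateDiscount(Order order)
    {
        // Example rule: 5% off orders over 1000.
        return order.Total > 1000m ? order.Total * 0.05m : 0m;
    }
}

// ...and is applied where it belongs (injected via DI), not inside an AutoMapper resolver.
public class OrderService
{
    private readonly IDiscountCalculator _calculator;

    public OrderService(IDiscountCalculator calculator)
    {
        _calculator = calculator;
    }

    public OrderViewModel GetOrder(Order order)
    {
        var viewModel = Mapper.Map<OrderViewModel>(order); // simple one-to-one mapping only
        viewModel.Discount = _calculator.CalculateDiscount(order);
        return viewModel;
    }
}
```

The mapping itself stays a dumb one-to-one copy, while the discount rule can be unit-tested on its own.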

Here is how I’m using AutoMapper these days:

Somewhere in CommonAssembly, a very, very simple abstraction (optional):
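
The original gist is not reproduced here, so the following is only a sketch of the idea; the ObjectMapper name is illustrative:

```csharp
using AutoMapper;

// CommonAssembly: an optional, deliberately thin wrapper so that only this class
// references AutoMapper directly.
public static class ObjectMapper
{
    public static TDestination Map<TDestination>(object source)
    {
        return Mapper.Map<TDestination>(source);
    }
}
```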

Somewhere in BusinessLogicAssembly, and in any other assembly where you want to define mappings (these can be split into as many profiles as needed):
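
Again a sketch – the profile and type names are made up, but the shape is the standard AutoMapper Profile:

```csharp
using AutoMapper;

// BusinessLogicAssembly: mappings grouped into a profile.
public class BusinessLogicProfile : Profile
{
    public BusinessLogicProfile()
    {
        // Simple one-to-one mappings, properties matched by name.
        CreateMap<Order, OrderViewModel>();
        // ...more CreateMap calls, or additional Profile classes, as needed.
    }
}
```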

Somewhere in the startup code in BootstrappingAssembly (Global.asax, etc.):
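
Something along these lines (assuming the profile from the previous sketch):

```csharp
using AutoMapper;

// BootstrappingAssembly: one-time configuration at application startup.
public static class MappingBootstrapper
{
    public static void Initialize()
    {
        Mapper.Initialize(cfg => cfg.AddProfile<BusinessLogicProfile>());
        Mapper.AssertConfigurationIsValid(); // fail fast if any mapping is misconfigured
    }
}
```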

And here is the usage:
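
Roughly like this, continuing the illustrative names from above:

```csharp
// Given an Order instance from somewhere:
var orderViewModel = ObjectMapper.Map<OrderViewModel>(order);

// Or, without the optional abstraction, straight through AutoMapper:
var sameResult = Mapper.Map<OrderViewModel>(order);
```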

That’s it. I do not understand why some simple things are made complex.

There is another advantage to keeping it minimalistic – maintainability. I'm working on a relatively new project that was created from a company template, and as a result it had an older version of AutoMapper hidden behind an abstraction. Upgrading it while keeping all the old interfaces would have meant some work, as the abstraction used some of the APIs that changed. Instead I threw away all of those abstractions and upgraded the library. Next time we upgrade, there will simply be way less code to worry about.

Please let me know if you share the same opinion.




Single Git Repository for Microservices

November 14, 2016 Opinion No comments

Just recently I joined a team. We write an intranet web application. There is nothing too special about it, except that it was designed to be implemented as microservices, while de facto, at the moment, it is a classical single .NET MVC application. This happened for a simple reason: meeting the first release deadline.

The design was reflected in how source control was set up: one git repository per service. Unfortunately, this required a number of maneuvers to stay in sync and to push changes, as the team was making scattered changes in multiple repositories. It also made it more difficult to consolidate NuGet packages and other dependencies, as all of them lived in different repositories.

I think that microservices, and the hard reflection of their boundaries in the form of source code repositories, should evolve naturally. Starting with a single repository sounds more reasonable. If you keep the idea of microservices in your head and decouple your code nicely, nothing stops you from creating new repositories as your service boundaries start to take shape.

Taking this into account, we merged the repositories into one. The only question was how to keep the source code history. It turns out the history can easily be preserved by employing the git subtree command and placing each of the service repositories as a subdirectory of the new single repository.
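
A sketch of the commands involved – the repository names and URLs here are made up for illustration:

```
# run inside the new single repository
git subtree add --prefix=ServiceA https://example.com/git/service-a.git master
git subtree add --prefix=ServiceB https://example.com/git/service-b.git master
```

Each command brings the full history of the corresponding service repository in under its own subdirectory.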

As a result, the team is working much more effectively, as we do not waste time on routine synchronization and checking who did what where.

Conclusion: theoretically, microservices should be implemented in their own repositories. That's true, but in practice, for a relatively small and new project with only one team working on it, a single repository wins.




Running Challenge Completed. Health, Motivation, and Other Aspects Of Running

October 2, 2016 Opinion No comments

Are you a programmer bent by scoliosis? Do you spend too much time sitting? Maybe you look a bit flabby? Super tired at and after work? Chances are high that some of these are true for you. Some certainly are true for me. This year I tried to improve my situation by running. Here are my thoughts and humble recommendations.

Dirty Running Shoes

Running and its impact on your Health

“Being physically active reduces the risk of heart disease, cancer and other diseases, potentially extending longevity,” many studies show, according to this article. Running is probably one of the most accessible forms of exercise. All you need to start is a pair of shoes. This research paper is a good resource for learning about the impact of running and other exercise on chronic diseases and general mortality.

So why running? Because it is easy to start with and because we are made for it. “Humans can outrun nearly every other animal on the planet over long distances,” says this article. Funnily enough, there is a yearly Human vs. Horse marathon competition.

If you ask yourself what you want the last two decades before your death to look like, most likely you would picture a healthy, mobile, and socially active person. You would also prefer those two decades to be your 80s and 90s, right? Light running for as little as 1 hour a week could add as much as 6 years to your life. This long-term study showed that “the age-adjusted increase in survival with jogging was 6.2 years in men and 5.6 years in women.” (For the pedantic who want to know the net win: (2 hrs/week * 52 weeks a year * 50 years) / 16 waking hours a day = 325 days lost to running, which still leaves a net gain of about 5 years.)

Motivation

At my age I do not think about death that much. My main motivation for running is improving my health. I know that for many people extra weight is a motivating factor. For me it is not: instead of losing weight I gained some 2-3 kg. Likely I'm so skinny there is no fat to lose, though there is room for leg muscle growth. Unfortunately, running is often boring, and it is very hard to get yourself outside for a run on a nasty cold day. Here are a few things that helped me keep running this year:

Run different routes

Always running in the same location, taking the same path, is boring. If you travel somewhere, just take your shoes with you and have a run in the new place. Not only do you get to have another run, but you also explore the location. I ran in five different countries this year and can tell that those runs were more interesting than the usual ones next to home.

Running in Ireland Gap Of Dunloe

Join friends or running club

I went for runs rather rarely at the beginning of the year, but later, as I started running with friends, I began to run more frequently. It is always much more pleasant to have a conversation and learn a few new things from friends, especially if your areas of interest overlap in more than just running.

Running With Friends

Sign up for a challenge

Sports are competitive by nature. You can have a friendly competition with your fellow runners, or you can take on a virtual challenge. That works great because the clock is ticking and you want to get it done. This September I took it to the next level by signing up for Strava monthly challenges. I completed all of them. See the Trophy Case below:

Strava September Running Challenge

Be careful and avoid injury

You don't want to walk with a cane when you are old because you were stupid enough to run too much and too hard when young. I'm sick of running because of this September challenge. I completed it, but I ran the last two runs through the pain, injured. I'm now recovering using the RICE technique. From now on I will take it easier. I suggest you do the same.

RICE Recovery - Icing an Ankle

You can do it

I'm not a good runner. At the beginning of the year I could barely run 5 km, and I didn't know how I would complete my planned 40 runs, as it was so hard. Running 40 times was my main goal for the year. I did not expect that I would run around 50 times, totaling 320 km. And it is not the end of the year yet. I also ran a half-marathon distance up the Kahlenberg hill next to Vienna. If I can do it – you can!

Conclusion

I completely agree with the research and studies saying that “for the majority of people the benefits of running outweigh the risks,” and yet at the same time I voluntarily ran through an injury just to complete my challenge. Motivation is an important factor, but runners have to be careful and moderate their exercise. This is especially true if you run for health reasons. Just try to make your runs more interesting and enjoy your life… longer.

Running under Speed sign




Amazon Interview Experience

April 19, 2016 Career, Opinion 4 comments

Just recently I went through the amazing interviewing process at Amazon. I had a really positive experience, and I would like to share it with you.

Amazon Logo for a blog post on Amazon Interview Experience

Disclaimer: The opinions expressed herein are exclusively my own personal opinions. To the best of my knowledge and understanding, this blog post is compliant with the Non-Disclosure Agreement I have signed.

Amazon Interview in general

First of all, it is not like any other interview I have ever gone to. Though I have to mention that it is also the first time I have gone through an interview with a tech giant.

The interviewing process is really language- and framework-agnostic, and at the same time it is very code- and problem-solving-centric. You are not asked about the specifics of any given programming language, but you are expected to write a lot of code on a whiteboard. You are expected to solve problems, and the way you tackle them and the approaches you take are very important and carefully evaluated.

Now I see how Amazon manages to build great teams and deliver awesome products. In my opinion, this is because they hire problem solvers and people capable of coding and thinking. I believe the bias factor is greatly reduced because of the way they do interviews.

Let me take you through my Amazon interview experience. I have signed an NDA, so I cannot share any specifics of the projects they are hiring people for, nor is it appropriate to mention the exact questions I was asked.

Stage 0: How to get into the process?

Amazon was organizing a hiring event in Vienna (Austria), and one of their recruiters contacted me on LinkedIn. I don't know whether there was anything particularly interesting in my profile, but that is how it happened.

If you are actively looking for a job and want to get to the interview with one of big tech companies, seeking a direct referral from within the company is probably the best bet.

In my case, it was one of those rare HR messages which I didn't want to ignore and decided to pursue further. I was asked to share my resume. After it was reviewed, I was asked to do an online coding test.

Stage 1: Pre-screening coding test

To find out whether you are even worth talking to, there is a coding test. It is organized online, but you also have the option to program live with one of their coders.

You can program in any language you feel comfortable with, as long as it is one of the major ones.

My test had three problems to solve, two of which required writing code. It is very similar to what you get at programming competitions (ACM, TopCoder, etc.), but less formalized and with fewer tests. I personally spent way too much time on the first task because I decided to implement an efficient solution using BFS. Later I realized that any solution that passed the tests would have been fine, so instead of my O(n) solution, an O(n^2) one would have been enough to get me to the screening session. Two of my solutions passed their tests. I didn't have time for the third question, so I just wrote that I had run out of time and explained how I would have solved it.

Stage 2: Screening

Screening is just a 30-minute chat. They only ask you about basic computer science fundamentals, and the questions are completely agnostic of programming languages or technologies. I believe they just want to check that you know the basics of programming, that you can express yourself, and that you will feel comfortable during the interviews.

Stage 3: Interview Day

This is something awesome. My right thumb still hurts from holding the whiteboard marker. Something I really liked was that there was challenge, there was coding, there was problem solving.

Almost all the other interviews I had before, with other companies, consisted of standard boring questions about one specific technology stack, starting from the persistence layer and moving towards the presentation layer. I never had to write code during an interview (a few lines of code at most). And only once was I asked to solve a specific problem, for a start-up company.

I had four interviews during the day, each lasting 50-60 minutes. To me it looked like I was getting slightly more senior-level questions with each subsequent interview, but I may be mistaken. I also don't know whether this is how interviews are usually organized at Amazon.

Interview 1: Answer a few situational questions, write code!

So, as I said, I cannot tell you exactly what I was asked, but my first interviewer was a person who writes a lot of code himself. He slowly transitioned me to the coding task, which was to implement a data structure of some sort. I guess it could have been something else as well. I messed up the beginning, but after that it was fairly easy and I had the implementation on the whiteboard.

Interview 2: Write code!

The second one was really interesting. It didn't have much of an introduction – there was a coding problem right away. Unfortunately, I was a bit too sloppy on this one. I jumped to writing code too quickly. As a first solution I implemented three nested loops, O(n^3), and then, with guidance from my interviewer, I switched to writing an O(n^2) solution. Even though I was able to solve the problem, I feel I was too quick to jump to writing code and didn't think enough up front. So, remember – less rush!

Interview 3: Answer a few situational questions related to projects at a slightly higher level, write code!

The third one was much smoother at the beginning. Many situational questions. When we switched to writing code, I believe the problem was also more complex. I fairly quickly identified that the problem had to be solved using the XYZ technique, but I struggled a lot with the actual implementation. I didn't manage to produce a complete solution, but I hope I was able to convince my interviewer that with a bit more time I would have had one in place.

Interview 4: Solve a design problem. Draw diagrams! And finally, ask some real questions about projects, teams, and learning.

This one was the easiest for me. There were two design questions. The first one was object-oriented design for some system, and the second one was system design for the very same system. I didn't have issues with either of those (at least I think so).

I was very quick with the OO design. I've done a lot of design in the past and I believe in the core principles of OOD. Besides, I have even written a book on design patterns. That had to play some role.

The system design part also went very smoothly, probably because I have very relevant experience working for a big web company. Scalability, availability, and performance principles for system design aren't new to me. I've never configured a load balancer, nor have I ever set up master-slave replication, but I knew these things and how they are supposed to work. Just by coincidence, I recently read the “Release It!” book, and that could have played some role as well.

In the end, I got to ask the questions that were most important to me: about the team, learning and growth opportunities, projects, and technologies.

Stage 4: What happens next?

I don’t know. I’m not there yet.

Preparation

There is one aspect I don't like about an interviewing process like this. Amazon, Facebook, Google, and Microsoft are huge tech companies with somewhat similar standardized interviewing processes. You can read online about what it looks like and what the common questions are. There is even a book, “Cracking the Coding Interview,” that has 189 coding questions.

So, hypothetically, let's imagine there are two deep copies of me. By “deep” I mean that we both have absolutely identical personalities, knowledge, background, and experience. Now we are both to be interviewed in one month. Copy One doesn't have time to prepare and just looks up the main types of questions. Copy Two takes a month of vacation and sits in TopCoder practice rooms solving, say, 300-500 problems. Copy One is also capable of doing that, as both of us did some ACM in university 8 years ago. Now when the interview comes, Copy Two just shines and cracks all the questions. Copy One struggles with the problems but manages them somehow. So when the performance of these two copies is compared, Copy Two wins. But what the hiring company gets is almost identical, with a difference of one month of TopCoder practice.

Definitely, there are reasons why the process has to be standardized. It could have been way worse if it was otherwise.

I did prepare, though not as much as I would have liked. Buying a kids' whiteboard was definitely a good investment on my side. I did a couple of problems on the whiteboard, watched a couple of MIT lectures on algorithms, and bought a copy of CLRS (I was even ready to use the Master Theorem if asked).

Conclusion

I really liked the interviewing process at Amazon. I appreciate all the time invested in interviewing me.

Regardless of whether I get an offer or not, or whether I will have to decline it, I won't change my opinion about this interviewing experience. Just going through the Amazon interview is worth the time and preparation effort.




Working with Excel files using EPPlus and XLSX.JS

February 29, 2016 C#, JavaScript, Opinion, QuickTip, Tools No comments

Need to quickly generate an Excel file on the server (.NET) with no headache, or need to import an Excel file in your JS client?

I can recommend two libraries for their simplicity and ease of use.

XLSX.JS

To smooth the transition from Excel files to electronic handling of data, we offered our users the possibility of importing data. As our application is web based, that meant using some JS library to work with Excel files. A slight complication was that, over time, our users had developed a habit of making all kinds of modifications to their “custom” Excel files. So something that would allow us to easily work with different formats was preferred.

The XLSX.JS library, available on GitHub, proved to be a good choice. I can only imagine how much better it is than some of the monsters that would only work in IE. I think the getting-started documentation is fairly good, so I will just go through some bits and pieces of our use case.

Setting up XLSX.JS and reading files is straightforward: npm or bower, include the file, and you are ready to write XLSX.readFile('test.xlsx') or App.XLSX.read(excelBinaryContents, {type: 'binary'}).

Reading as binary is probably a better bet, as it will work in IE, though you will have to write some code to implement FileReader.prototype.readAsBinaryString() in IE. You can have a look at our implementation of the file-select component in the gist.

Using XLSX in your JavaScript is fairly easy, though there might be some hiccups with parsing dates. See this gist.

EPPlus

We also had two use cases where we needed to generate an Excel file on the server. One was to generate documentation for business rules, so we could keep it up to date and share it with our users at all times; it was implemented as part of CI, which would save a file to the file system. The other use case was downloading business-related data via the web interface. Both were super easy to do with the open source library EPPlus.

You just add EPPlus through NuGet and start using it (var excelPackage = new ExcelPackage(newFileInfo)). See the gist below: the first file demonstrates how to operate on cells, and the second one shows how you can use streams to make the file downloadable.
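
The gists themselves are not embedded here, but the idea looks roughly like this (a sketch of the same two use cases, not the exact project code):

```csharp
using System.IO;
using OfficeOpenXml; // EPPlus

public static class ExcelReportWriter
{
    // Writing cells and saving to a file (e.g. from a CI step).
    public static void SaveReport(FileInfo newFileInfo)
    {
        using (var excelPackage = new ExcelPackage(newFileInfo))
        {
            var sheet = excelPackage.Workbook.Worksheets.Add("Business Rules");
            sheet.Cells[1, 1].Value = "Rule";
            sheet.Cells[1, 2].Value = "Description";
            sheet.Cells[2, 1].Value = "R-001";
            sheet.Cells[2, 2].Value = "Example rule";
            excelPackage.Save();
        }
    }

    // Producing a stream so the file can be returned as a download from the web interface.
    public static MemoryStream CreateDownloadStream()
    {
        var stream = new MemoryStream();
        using (var excelPackage = new ExcelPackage())
        {
            var sheet = excelPackage.Workbook.Worksheets.Add("Data");
            sheet.Cells["A1"].Value = "Exported business data";
            excelPackage.SaveAs(stream);
        }
        stream.Position = 0;
        return stream;
    }
}
```

The stream from the second method can then be handed to whatever your web framework uses for file downloads (a FileStreamResult or similar).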

These two libraries really helped me implement some of the Excel-related business use cases efficiently.

Next time I have to generate an Excel file on the server or read one on the client, I will most certainly use these two again.




Beauty of Open Source in Practice

December 20, 2015 Opinion 2 comments

We found a bug in Internet Explorer. It took two months of exhausting e-mail communication for it to be acknowledged as such, and no fix was promised.

We found a bug in an open source library. It got fixed over the weekend after we raised an issue.

This summarises everything I wanted to share in this post. I have more, so continue reading.

As a disclaimer, I want to say that I'm not attempting a comprehensive look at open source versus closed source. This is just an example of what happened on my project.

Closed source code case with Internet Explorer

We use Microsoft technologies wherever possible, unless there is no sensible solution to the problem we need to solve. The application we implemented is a large, offline-capable single-page application with tons of controls rendered at once. We noticed that IE crashes after prolonged usage of the app, though we were not experiencing the same in other browsers. It took us a while to realise that there was a legitimate memory leak in IE. More details on how we tried to troubleshoot the issue are here. Afterwards we started a long and boring communication with Microsoft, which ended with them acknowledging a bug in IE. Actually, it was a known bug. They said that attempts to fix this bug had caused them more trouble, so it is unlikely that a fix will appear in any IE11 update or in the Edge browser. We got approval for our users to use Chrome, as it doesn't have this memory issue and in general is much faster.

Open source code case with Jurassic

The app has plenty of shared logic that we want to execute both on the client and on the server. We decided that we want it written in JavaScript. As our backend is .NET, we used the Jurassic library to compile JavaScript code on the server and then execute it whenever we need it. We also tried Edge.js, but at the moment we are not happy with its stability when run under IIS.
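
For illustration, executing shared JavaScript on the server with Jurassic looks roughly like this (a sketch; the script path and function name are made up):

```csharp
using Jurassic;

public class SharedLogicRunner
{
    private readonly ScriptEngine _engine;

    public SharedLogicRunner()
    {
        _engine = new ScriptEngine();
        // Compile and execute the shared script once; its functions can then be called as needed.
        _engine.ExecuteFile(@"Scripts\sharedLogic.js");
    }

    public double CalculateTotal(double price, int quantity)
    {
        // Call a global function defined in sharedLogic.js.
        return _engine.CallGlobalFunction<double>("calculateTotal", price, quantity);
    }
}
```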

We stumbled upon an interesting bug: the IL code emitted by the Jurassic library was causing a System.InvalidProgramException in some environments. We narrowed it down to a continue statement used in a for loop, and noticed that it was only used in the moment.js library. We modified the code of moment.js to avoid the continue statements. This fixed the issue, so we were already covered by open source, since we could modify it. Of course, we didn't stop there and posted a bug on Jurassic's forum. The maintainer had a look over the weekend and fixed the issue for us.

Conclusion

Of course, this is just one example where using open source proved to be the nice way to go. It doesn't always work like that, and at times it is the wrong choice. I mainly wanted to share this because the contrast was so striking for me personally.




Backup and Restore. Story and thoughts

April 7, 2014 HowTo, Opinion No comments

Usually when I have some adventures with backing up and restoring I don't write a blog post about it. But this time it was a bit special: I unintentionally repartitioned the external drive where I kept all of my system backups and large media files. This made me rethink my backup/restore strategy.

The story

I will begin with the story of my adventures. Currently I'm running Windows 8.1. The other evening I decided I wanted to play an old game from my school days. Since I couldn't find it, I decided to play a newer version, still relatively old – released in 2006. As usual with old games and current OSes, it wouldn't start. As is normal for a programmer, I didn't give up. First of all, there was something with DirectX: it was complaining that I needed a newer version, which of course I had, but mine was way too new for the game to understand. After fixing that, the game still wouldn't start because of other problems. I changed a few files in system32. It still didn't help. Then I decided on another approach – installing WinXP in a virtual machine and running the game there. I tried it with VirtualBox, and it didn't work because of some other issues. Then I found a Win7 virtual machine I had used before with VMware, but that VM didn't want to start.

At this point I decided to give up on that game. To compensate, I started looking for a small game I had played in university. Unfortunately, the other game also didn't want to start and froze my PC. After a reboot… ah… actually, there was no reboot, since my Windows made up its mind not to boot any longer!

Now I had to restore. Thankfully my Dell laptop had a recovery boot partition, and I was able to quickly restore to a previous point in time. Not sure why Windows wouldn't boot, given that the recovery wasn't painful at all.

After that happened, I decided that I needed additional restoring power. So I ran a program by Dell called “Backup and Recovery” to create yet another backup of the system. The program asked me for a drive, and I picked a free one on my external HDD where I keep system images. Unfortunately, I didn't pay attention to what that special backup might do. It created a bootable partition and, of course, repartitioned the entire drive. I pulled the USB cable when I realized what it had started to do!

I had to recover again, but this time the files on the repartitioned drive. If you look online, there are some good programs that allow you to restore deleted files and even find lost partitions. One of them is EaseUS, but it costs money and I didn't want to pay for one-time use. So I found a free one called “Find And Mount” that can find a lost partition and mount it as another drive so you can copy files over. That's good, but for some reason the recovery speed was only 512 Kbit/s, so you can imagine how much time it would take to recover 2 TB of stuff. I proceeded with restoring only the most important things. In total, the restore took maybe 30+ hours.

A bit more on this story. Since I needed to restore so much stuff, I didn't have space for it. My laptop has only a 256 GB SSD, and my wife's laptop (formerly mine) also has only that much. But I had a 512 GB HDD lying around, so I just bought an external HDD drive case for some 13 EUR and thus got some additional space.

So that was the end of the story. Now I want to document what I do, and what I want to start doing in addition, to be on the safe side.

Backup strategy

What are the most important files not to lose? Photos and other things that are strongly personal. I'm pretty sure that if you have work projects, they are already under source control or handled by other parties and responsible people, so work stuff is therefore less critical.

My idea is that your most important things should simply be spread as widely as possible. This is true for my photos: they are on every hard drive I have ever had – at least 5 in Ukraine and 4 here in Austria. Older photos are also on multiple DVDs and CDs. Some photos are in Picasa – I'm still on an old Google offer of 20 GB for just $4 per year. All phone photos are automatically uploaded to OneDrive, with 8 GB there. I also used to have 100 GB on Dropbox, but I found it too expensive and stopped keeping photos there.

All my personal projects and things I created on my own are treated almost the same as photos, only they are not so public and are often encrypted.

So, roughly, the backup strategy is:

  • Photos and personal files – as many copies as possible, in as many physical places as possible, including cloud storage
  • System and everything else – system images and a bootable drive, including the “File History” feature in Windows 8.1

I have started to think about whether I want to buy a NAS and some more cloud storage. For now I will see if I can get myself into trouble again and what it would cost if that happens.

In the image below, Disk 0 is my laptop's disk. Disk 1 is where I now have complete images of the 2 laptops at home, a complete copy of Disk 2, and also media junk. Disk 2 is another drive with Windows installed, which will now be used regularly for backups and for “File History”.

(Screenshot: Disk Management view showing Disk 0, Disk 1, and Disk 2)

Now some links and how-to info:

  1. Find and Mount application for restoring partitions
  2. EaseUS application with lots of recovery options, but costs money
  3. Enabling “File History” and creating a System Image in Win8.1 can be done from here: Control Panel → System and Security → File History
  4. External USB 3.0 HDD 2.5’’ Case I bought
  5. Total Commander has features to find duplicate files, synchronize directories, and many others, which come in handy when handling the mess after spreading too many copies around

This was just a story to share with you, but it emphasises one more time that backups are important.

P.S. I finally managed to play the newer version of the second game :)

